Test Report: Hyper-V_Windows 19876

0db15b506654906b6081fade5258c34c52419f7c:2024-10-28:36841

Failed tests (23/206)

TestErrorSpam/setup (204.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-046700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-046700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 --driver=hyperv: (3m24.5134438s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-046700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19876
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-046700" primary control-plane node in "nospam-046700" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-046700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (204.51s)
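The two "unexpected stderr" lines flagged at error_spam_test.go:96 are the registry.k8s.io connectivity warning and its proxy hint, repeated in the stderr dump above. A check of this kind typically splits stderr into lines and fails on anything outside a small allowlist, which is how a transient registry connectivity warning can fail an otherwise successful start. The Go sketch below illustrates that pattern; the function name and allowlist entries are hypothetical and are not minikube's actual test code.

    // Hypothetical sketch of an "unexpected stderr" filter; not the real
    // error_spam_test.go implementation.
    package main

    import (
        "fmt"
        "strings"
    )

    // allowedStderr holds substrings of stderr lines the check tolerates
    // (illustrative values only).
    var allowedStderr = []string{
        "kubectl and minikube configuration will be stored",
    }

    // unexpectedStderr returns every non-empty stderr line that matches no
    // allowlist entry.
    func unexpectedStderr(stderr string) []string {
        var unexpected []string
        for _, line := range strings.Split(stderr, "\n") {
            line = strings.TrimSpace(line)
            if line == "" {
                continue
            }
            allowed := false
            for _, want := range allowedStderr {
                if strings.Contains(line, want) {
                    allowed = true
                    break
                }
            }
            if !allowed {
                unexpected = append(unexpected, line)
            }
        }
        return unexpected
    }

    func main() {
        stderr := "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM\n" +
            "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/\n"
        for _, line := range unexpectedStderr(stderr) {
            fmt.Printf("unexpected stderr: %q\n", line)
        }
    }

Run against the stderr captured above, both lines would be reported, matching this failure.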

TestFunctional/serial/MinikubeKubectlCmdDirectly (36.34s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
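The message above has the shape of a Go *os.LinkError ("link <old> <new>: ..."): the test hard-links the freshly built minikube binary to out\kubectl.exe while that file already exists, presumably left over from a previous run, and Windows refuses to overwrite it. The sketch below illustrates that failure mode and one common workaround, removing any stale target before linking; linkReplacing is a hypothetical helper, not the fix used in functional_test.go.

    // Hypothetical sketch: avoid "Cannot create a file when that file already
    // exists" from os.Link on Windows by removing a leftover target first.
    // Not minikube test code.
    package main

    import (
        "fmt"
        "os"
    )

    // linkReplacing removes any stale dst left by a previous run, then creates
    // the hard link; "not exist" errors are ignored so a first run also works.
    func linkReplacing(src, dst string) error {
        if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
            return fmt.Errorf("removing stale %s: %w", dst, err)
        }
        return os.Link(src, dst)
    }

    func main() {
        if err := linkReplacing(`out/minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
            fmt.Println("link failed:", err)
        }
    }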
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-150200 -n functional-150200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-150200 -n functional-150200: (12.8717661s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 logs -n 25: (9.269104s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:02 UTC | 28 Oct 24 11:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-046700 --log_dir                                     | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-046700                                            | nospam-046700     | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:03 UTC |
	| start   | -p functional-150200                                        | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:03 UTC | 28 Oct 24 11:07 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-150200                                        | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:07 UTC | 28 Oct 24 11:09 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-150200 cache add                                 | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:09 UTC | 28 Oct 24 11:09 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-150200 cache add                                 | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:09 UTC | 28 Oct 24 11:09 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-150200 cache add                                 | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:09 UTC | 28 Oct 24 11:10 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-150200 cache add                                 | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	|         | minikube-local-cache-test:functional-150200                 |                   |                   |         |                     |                     |
	| cache   | functional-150200 cache delete                              | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	|         | minikube-local-cache-test:functional-150200                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	| ssh     | functional-150200 ssh sudo                                  | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-150200                                           | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-150200 ssh                                       | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-150200 cache reload                              | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:10 UTC |
	| ssh     | functional-150200 ssh                                       | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:10 UTC | 28 Oct 24 11:11 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:11 UTC | 28 Oct 24 11:11 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:11 UTC | 28 Oct 24 11:11 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-150200 kubectl --                                | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:11 UTC | 28 Oct 24 11:11 UTC |
	|         | --context functional-150200                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:07:23
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:07:23.855068    7696 out.go:345] Setting OutFile to fd 1512 ...
	I1028 11:07:23.936698    7696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:07:23.936698    7696 out.go:358] Setting ErrFile to fd 1132...
	I1028 11:07:23.936698    7696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:07:23.961086    7696 out.go:352] Setting JSON to false
	I1028 11:07:23.965282    7696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":161469,"bootTime":1729952174,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 11:07:23.965397    7696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:07:23.969809    7696 out.go:177] * [functional-150200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 11:07:23.974496    7696 notify.go:220] Checking for updates...
	I1028 11:07:23.974496    7696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:07:23.977688    7696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:07:23.980709    7696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 11:07:23.983359    7696 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:07:23.985897    7696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:07:23.989597    7696 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:07:23.989597    7696 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:07:29.743844    7696 out.go:177] * Using the hyperv driver based on existing profile
	I1028 11:07:29.747148    7696 start.go:297] selected driver: hyperv
	I1028 11:07:29.747148    7696 start.go:901] validating driver "hyperv" against &{Name:functional-150200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:functional-150200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.250.220 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:07:29.748019    7696 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:07:29.801877    7696 cni.go:84] Creating CNI manager for ""
	I1028 11:07:29.801877    7696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 11:07:29.802950    7696 start.go:340] cluster config:
	{Name:functional-150200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-150200 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.250.220 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:07:29.802950    7696 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:07:29.807790    7696 out.go:177] * Starting "functional-150200" primary control-plane node in "functional-150200" cluster
	I1028 11:07:29.810936    7696 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:07:29.810936    7696 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 11:07:29.811304    7696 cache.go:56] Caching tarball of preloaded images
	I1028 11:07:29.811677    7696 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:07:29.811677    7696 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:07:29.811677    7696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\config.json ...
	I1028 11:07:29.813494    7696 start.go:360] acquireMachinesLock for functional-150200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:07:29.814504    7696 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-150200"
	I1028 11:07:29.814750    7696 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:07:29.814750    7696 fix.go:54] fixHost starting: 
	I1028 11:07:29.815520    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:07:32.635685    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:07:32.635763    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:32.635864    7696 fix.go:112] recreateIfNeeded on functional-150200: state=Running err=<nil>
	W1028 11:07:32.635925    7696 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:07:32.640762    7696 out.go:177] * Updating the running hyperv "functional-150200" VM ...
	I1028 11:07:32.643484    7696 machine.go:93] provisionDockerMachine start ...
	I1028 11:07:32.643484    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:07:34.941627    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:07:34.941980    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:34.942135    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:07:37.598519    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:07:37.599519    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:37.606564    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:07:37.607228    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:07:37.607761    7696 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:07:37.746919    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-150200
	
	I1028 11:07:37.747029    7696 buildroot.go:166] provisioning hostname "functional-150200"
	I1028 11:07:37.747110    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:07:39.977022    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:07:39.977422    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:39.977422    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:07:42.715698    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:07:42.715698    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:42.720876    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:07:42.722065    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:07:42.722065    7696 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-150200 && echo "functional-150200" | sudo tee /etc/hostname
	I1028 11:07:42.884848    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-150200
	
	I1028 11:07:42.884848    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:07:45.182270    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:07:45.182270    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:45.182270    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:07:47.944325    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:07:47.944465    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:47.950307    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:07:47.950307    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:07:47.950307    7696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-150200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-150200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-150200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:07:48.086606    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:07:48.086700    7696 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:07:48.086830    7696 buildroot.go:174] setting up certificates
	I1028 11:07:48.086830    7696 provision.go:84] configureAuth start
	I1028 11:07:48.086830    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:07:50.328278    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:07:50.328463    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:50.328463    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:07:53.082167    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:07:53.082167    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:53.082167    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:07:55.358209    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:07:55.358209    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:55.359247    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:07:58.056713    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:07:58.056713    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:07:58.057263    7696 provision.go:143] copyHostCerts
	I1028 11:07:58.057488    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:07:58.057488    7696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:07:58.057488    7696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:07:58.058383    7696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:07:58.059653    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:07:58.059653    7696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:07:58.059653    7696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:07:58.059653    7696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:07:58.061370    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:07:58.061370    7696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:07:58.061370    7696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:07:58.062014    7696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:07:58.062964    7696 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-150200 san=[127.0.0.1 172.27.250.220 functional-150200 localhost minikube]
	I1028 11:07:58.538516    7696 provision.go:177] copyRemoteCerts
	I1028 11:07:58.558209    7696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:07:58.558209    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:00.828045    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:00.828761    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:00.828761    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:03.535067    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:03.535067    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:03.536269    7696 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
	I1028 11:08:03.651477    7696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0932111s)
	I1028 11:08:03.651622    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:08:03.651717    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:08:03.706880    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:08:03.707498    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 11:08:03.762118    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:08:03.762118    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:08:03.816241    7696 provision.go:87] duration metric: took 15.7292335s to configureAuth
	I1028 11:08:03.816241    7696 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:08:03.817113    7696 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:08:03.817113    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:06.071564    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:06.071649    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:06.071649    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:08.814465    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:08.815072    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:08.820840    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:08:08.821384    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:08:08.821384    7696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:08:08.957966    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:08:08.958023    7696 buildroot.go:70] root file system type: tmpfs
	I1028 11:08:08.958155    7696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:08:08.958380    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:11.221708    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:11.221708    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:11.221708    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:13.918776    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:13.918776    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:13.924447    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:08:13.924447    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:08:13.925026    7696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:08:14.102496    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:08:14.102496    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:16.374624    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:16.375308    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:16.375597    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:19.083930    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:19.085022    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:19.091270    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:08:19.091441    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:08:19.091441    7696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:08:19.256006    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:08:19.256006    7696 machine.go:96] duration metric: took 46.6119955s to provisionDockerMachine
	I1028 11:08:19.256006    7696 start.go:293] postStartSetup for "functional-150200" (driver="hyperv")
	I1028 11:08:19.256006    7696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:08:19.268241    7696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:08:19.268241    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:21.562620    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:21.562886    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:21.562886    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:24.287303    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:24.287794    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:24.288292    7696 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
	I1028 11:08:24.406186    7696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1378869s)
	I1028 11:08:24.417833    7696 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:08:24.427516    7696 command_runner.go:130] > NAME=Buildroot
	I1028 11:08:24.427516    7696 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 11:08:24.427516    7696 command_runner.go:130] > ID=buildroot
	I1028 11:08:24.427516    7696 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 11:08:24.427516    7696 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 11:08:24.427516    7696 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:08:24.427516    7696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:08:24.428053    7696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:08:24.429169    7696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:08:24.429315    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:08:24.430702    7696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9608\hosts -> hosts in /etc/test/nested/copy/9608
	I1028 11:08:24.430702    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9608\hosts -> /etc/test/nested/copy/9608/hosts
	I1028 11:08:24.443532    7696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9608
	I1028 11:08:24.473352    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:08:24.534370    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9608\hosts --> /etc/test/nested/copy/9608/hosts (40 bytes)
	I1028 11:08:24.617840    7696 start.go:296] duration metric: took 5.361774s for postStartSetup
	I1028 11:08:24.617963    7696 fix.go:56] duration metric: took 54.8025939s for fixHost
	I1028 11:08:24.618085    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:26.907648    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:26.907648    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:26.907648    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:29.610385    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:29.610385    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:29.616087    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:08:29.616087    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:08:29.616664    7696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:08:29.761309    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113709.776010549
	
	I1028 11:08:29.761309    7696 fix.go:216] guest clock: 1730113709.776010549
	I1028 11:08:29.761309    7696 fix.go:229] Guest: 2024-10-28 11:08:29.776010549 +0000 UTC Remote: 2024-10-28 11:08:24.6179638 +0000 UTC m=+60.860256601 (delta=5.158046749s)
	I1028 11:08:29.761309    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:32.093768    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:32.093865    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:32.094067    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:34.867172    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:34.867885    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:34.875177    7696 main.go:141] libmachine: Using SSH client type: native
	I1028 11:08:34.875177    7696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.220 22 <nil> <nil>}
	I1028 11:08:34.875177    7696 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730113709
	I1028 11:08:35.021133    7696 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:08:29 UTC 2024
	
	I1028 11:08:35.021212    7696 fix.go:236] clock set: Mon Oct 28 11:08:29 UTC 2024
	 (err=<nil>)
	I1028 11:08:35.021232    7696 start.go:83] releasing machines lock for "functional-150200", held for 1m5.2058084s
	I1028 11:08:35.021301    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:37.305724    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:37.306720    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:37.306720    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:40.113767    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:40.113767    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:40.118202    7696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:08:40.118386    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:40.127070    7696 ssh_runner.go:195] Run: cat /version.json
	I1028 11:08:40.128040    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:08:42.468221    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:42.468221    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:42.468320    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:42.492199    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:08:42.493345    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:42.493345    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:08:45.334786    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:45.335751    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:45.335751    7696 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
	I1028 11:08:45.359565    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:08:45.359565    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:08:45.360331    7696 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
	I1028 11:08:45.437797    7696 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1028 11:08:45.438683    7696 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3203456s)
	W1028 11:08:45.438683    7696 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 11:08:45.455864    7696 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 11:08:45.455864    7696 ssh_runner.go:235] Completed: cat /version.json: (5.327764s)
	I1028 11:08:45.469499    7696 ssh_runner.go:195] Run: systemctl --version
	I1028 11:08:45.480740    7696 command_runner.go:130] > systemd 252 (252)
	I1028 11:08:45.481097    7696 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 11:08:45.492202    7696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:08:45.502115    7696 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 11:08:45.502115    7696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:08:45.514415    7696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:08:45.532849    7696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:08:45.532849    7696 start.go:495] detecting cgroup driver to use...
	I1028 11:08:45.532849    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1028 11:08:45.553321    7696 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:08:45.553321    7696 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:08:45.575068    7696 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1028 11:08:45.587857    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:08:45.618729    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:08:45.639355    7696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:08:45.650828    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:08:45.683638    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:08:45.727537    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:08:45.765215    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:08:45.799466    7696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:08:45.833629    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:08:45.866398    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:08:45.898829    7696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:08:45.930327    7696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:08:45.950659    7696 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 11:08:45.962291    7696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:08:45.999106    7696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:08:46.286626    7696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:08:46.320526    7696 start.go:495] detecting cgroup driver to use...
	I1028 11:08:46.332840    7696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:08:46.362331    7696 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1028 11:08:46.362406    7696 command_runner.go:130] > [Unit]
	I1028 11:08:46.362406    7696 command_runner.go:130] > Description=Docker Application Container Engine
	I1028 11:08:46.362475    7696 command_runner.go:130] > Documentation=https://docs.docker.com
	I1028 11:08:46.362475    7696 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1028 11:08:46.362475    7696 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1028 11:08:46.362511    7696 command_runner.go:130] > StartLimitBurst=3
	I1028 11:08:46.362511    7696 command_runner.go:130] > StartLimitIntervalSec=60
	I1028 11:08:46.362511    7696 command_runner.go:130] > [Service]
	I1028 11:08:46.362511    7696 command_runner.go:130] > Type=notify
	I1028 11:08:46.362511    7696 command_runner.go:130] > Restart=on-failure
	I1028 11:08:46.362567    7696 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1028 11:08:46.362567    7696 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1028 11:08:46.362567    7696 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1028 11:08:46.362619    7696 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1028 11:08:46.362619    7696 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1028 11:08:46.362646    7696 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1028 11:08:46.362646    7696 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1028 11:08:46.362646    7696 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1028 11:08:46.362646    7696 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1028 11:08:46.362646    7696 command_runner.go:130] > ExecStart=
	I1028 11:08:46.362646    7696 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1028 11:08:46.362646    7696 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1028 11:08:46.362646    7696 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1028 11:08:46.362646    7696 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1028 11:08:46.362646    7696 command_runner.go:130] > LimitNOFILE=infinity
	I1028 11:08:46.362646    7696 command_runner.go:130] > LimitNPROC=infinity
	I1028 11:08:46.362646    7696 command_runner.go:130] > LimitCORE=infinity
	I1028 11:08:46.362646    7696 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1028 11:08:46.362646    7696 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1028 11:08:46.362646    7696 command_runner.go:130] > TasksMax=infinity
	I1028 11:08:46.362646    7696 command_runner.go:130] > TimeoutStartSec=0
	I1028 11:08:46.362646    7696 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1028 11:08:46.362646    7696 command_runner.go:130] > Delegate=yes
	I1028 11:08:46.362646    7696 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1028 11:08:46.362646    7696 command_runner.go:130] > KillMode=process
	I1028 11:08:46.362646    7696 command_runner.go:130] > [Install]
	I1028 11:08:46.362646    7696 command_runner.go:130] > WantedBy=multi-user.target
	I1028 11:08:46.375690    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:08:46.420328    7696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:08:46.477028    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:08:46.515183    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:08:46.540066    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:08:46.577423    7696 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1028 11:08:46.593297    7696 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:08:46.598468    7696 command_runner.go:130] > /usr/bin/cri-dockerd
	I1028 11:08:46.609619    7696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:08:46.630621    7696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:08:46.683960    7696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:08:46.971948    7696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:08:47.234290    7696 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:08:47.234523    7696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:08:47.281215    7696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:08:47.562080    7696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:09:00.632741    7696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0705141s)
	I1028 11:09:00.645884    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:09:00.684291    7696 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 11:09:00.742678    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:09:00.778511    7696 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:09:01.004794    7696 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:09:01.224823    7696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:09:01.449819    7696 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:09:01.502492    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:09:01.539629    7696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:09:01.765859    7696 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:09:01.906668    7696 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:09:01.918277    7696 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:09:01.928043    7696 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1028 11:09:01.928138    7696 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 11:09:01.928138    7696 command_runner.go:130] > Device: 0,22	Inode: 1411        Links: 1
	I1028 11:09:01.928138    7696 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1028 11:09:01.928138    7696 command_runner.go:130] > Access: 2024-10-28 11:09:01.806474355 +0000
	I1028 11:09:01.928207    7696 command_runner.go:130] > Modify: 2024-10-28 11:09:01.806474355 +0000
	I1028 11:09:01.928207    7696 command_runner.go:130] > Change: 2024-10-28 11:09:01.810474362 +0000
	I1028 11:09:01.928207    7696 command_runner.go:130] >  Birth: -
	I1028 11:09:01.928284    7696 start.go:563] Will wait 60s for crictl version
	I1028 11:09:01.939921    7696 ssh_runner.go:195] Run: which crictl
	I1028 11:09:01.946256    7696 command_runner.go:130] > /usr/bin/crictl
	I1028 11:09:01.957255    7696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:09:02.014964    7696 command_runner.go:130] > Version:  0.1.0
	I1028 11:09:02.014964    7696 command_runner.go:130] > RuntimeName:  docker
	I1028 11:09:02.014964    7696 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1028 11:09:02.014964    7696 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 11:09:02.016252    7696 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:09:02.025908    7696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:09:02.063439    7696 command_runner.go:130] > 27.3.1
	I1028 11:09:02.074893    7696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:09:02.113881    7696 command_runner.go:130] > 27.3.1
	I1028 11:09:02.118066    7696 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:09:02.118315    7696 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:09:02.125648    7696 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:09:02.125648    7696 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:09:02.125648    7696 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:09:02.125648    7696 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:09:02.128500    7696 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:09:02.129508    7696 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:09:02.140816    7696 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:09:02.153412    7696 command_runner.go:130] > 172.27.240.1	host.minikube.internal
	I1028 11:09:02.153412    7696 kubeadm.go:883] updating cluster {Name:functional-150200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-150200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.250.220 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:09:02.153412    7696 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:09:02.163959    7696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:09:02.196985    7696 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.2
	I1028 11:09:02.197061    7696 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.2
	I1028 11:09:02.197061    7696 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 11:09:02.197061    7696 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.2
	I1028 11:09:02.197061    7696 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1028 11:09:02.197061    7696 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1028 11:09:02.197061    7696 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1028 11:09:02.197061    7696 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:09:02.197213    7696 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 11:09:02.197213    7696 docker.go:619] Images already preloaded, skipping extraction
	I1028 11:09:02.207346    7696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:09:02.237101    7696 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.2
	I1028 11:09:02.237195    7696 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 11:09:02.237195    7696 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.2
	I1028 11:09:02.237195    7696 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.2
	I1028 11:09:02.237195    7696 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1028 11:09:02.237195    7696 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1028 11:09:02.237195    7696 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1028 11:09:02.237195    7696 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:09:02.237331    7696 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 11:09:02.237379    7696 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:09:02.237472    7696 kubeadm.go:934] updating node { 172.27.250.220 8441 v1.31.2 docker true true} ...
	I1028 11:09:02.237598    7696 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-150200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.250.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:functional-150200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:09:02.247655    7696 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 11:09:02.314547    7696 command_runner.go:130] > cgroupfs
	I1028 11:09:02.315000    7696 cni.go:84] Creating CNI manager for ""
	I1028 11:09:02.315063    7696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 11:09:02.315063    7696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:09:02.315063    7696 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.250.220 APIServerPort:8441 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-150200 NodeName:functional-150200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.250.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.250.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:09:02.315063    7696 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.250.220
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-150200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.27.250.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.250.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:09:02.328459    7696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:09:02.347400    7696 command_runner.go:130] > kubeadm
	I1028 11:09:02.347400    7696 command_runner.go:130] > kubectl
	I1028 11:09:02.347400    7696 command_runner.go:130] > kubelet
	I1028 11:09:02.347400    7696 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:09:02.359010    7696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:09:02.378599    7696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 11:09:02.412578    7696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:09:02.450488    7696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2301 bytes)
	I1028 11:09:02.497604    7696 ssh_runner.go:195] Run: grep 172.27.250.220	control-plane.minikube.internal$ /etc/hosts
	I1028 11:09:02.505230    7696 command_runner.go:130] > 172.27.250.220	control-plane.minikube.internal
	I1028 11:09:02.517035    7696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:09:02.739488    7696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:09:02.766277    7696 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200 for IP: 172.27.250.220
	I1028 11:09:02.766277    7696 certs.go:194] generating shared ca certs ...
	I1028 11:09:02.766277    7696 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:09:02.767181    7696 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:09:02.768251    7696 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:09:02.768456    7696 certs.go:256] generating profile certs ...
	I1028 11:09:02.769238    7696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\client.key
	I1028 11:09:02.769622    7696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\apiserver.key.cf786181
	I1028 11:09:02.769835    7696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\proxy-client.key
	I1028 11:09:02.769835    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:09:02.770171    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:09:02.770401    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:09:02.770622    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:09:02.770774    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:09:02.771063    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:09:02.771210    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:09:02.771210    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:09:02.771210    7696 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:09:02.771210    7696 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:09:02.772549    7696 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:09:02.772714    7696 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:09:02.772714    7696 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:09:02.773291    7696 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:09:02.773980    7696 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:09:02.774021    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:09:02.774021    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:09:02.774021    7696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:09:02.775830    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:09:02.854010    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:09:02.949455    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:09:03.028292    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:09:03.086327    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:09:03.140102    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 11:09:03.208467    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:09:03.269898    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-150200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 11:09:03.346201    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:09:03.442561    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:09:03.517533    7696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:09:03.590048    7696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:09:03.672617    7696 ssh_runner.go:195] Run: openssl version
	I1028 11:09:03.683426    7696 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 11:09:03.696139    7696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:09:03.749934    7696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:09:03.757460    7696 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:09:03.757460    7696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:09:03.771297    7696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:09:03.790273    7696 command_runner.go:130] > b5213941
	I1028 11:09:03.802802    7696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:09:03.891742    7696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:09:03.945478    7696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:09:03.953052    7696 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:09:03.953052    7696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:09:03.965643    7696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:09:03.980301    7696 command_runner.go:130] > 51391683
	I1028 11:09:03.992075    7696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:09:04.034685    7696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:09:04.071125    7696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:09:04.083568    7696 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:09:04.083964    7696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:09:04.096639    7696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:09:04.108728    7696 command_runner.go:130] > 3ec20f2e
	I1028 11:09:04.121733    7696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:09:04.177642    7696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:09:04.204696    7696 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:09:04.204756    7696 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 11:09:04.204756    7696 command_runner.go:130] > Device: 8,1	Inode: 2101603     Links: 1
	I1028 11:09:04.204818    7696 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:09:04.204818    7696 command_runner.go:130] > Access: 2024-10-28 11:06:56.701786968 +0000
	I1028 11:09:04.204818    7696 command_runner.go:130] > Modify: 2024-10-28 11:06:56.701786968 +0000
	I1028 11:09:04.204818    7696 command_runner.go:130] > Change: 2024-10-28 11:06:56.701786968 +0000
	I1028 11:09:04.204818    7696 command_runner.go:130] >  Birth: 2024-10-28 11:06:56.701786968 +0000
	I1028 11:09:04.217333    7696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:09:04.228489    7696 command_runner.go:130] > Certificate will not expire
	I1028 11:09:04.240806    7696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:09:04.249622    7696 command_runner.go:130] > Certificate will not expire
	I1028 11:09:04.261550    7696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:09:04.270633    7696 command_runner.go:130] > Certificate will not expire
	I1028 11:09:04.284497    7696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:09:04.292596    7696 command_runner.go:130] > Certificate will not expire
	I1028 11:09:04.306412    7696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:09:04.319936    7696 command_runner.go:130] > Certificate will not expire
	I1028 11:09:04.334251    7696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:09:04.352490    7696 command_runner.go:130] > Certificate will not expire
	I1028 11:09:04.352490    7696 kubeadm.go:392] StartCluster: {Name:functional-150200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-150200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.250.220 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:09:04.363368    7696 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 11:09:04.443631    7696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:09:04.484650    7696 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1028 11:09:04.485656    7696 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1028 11:09:04.485656    7696 command_runner.go:130] > /var/lib/minikube/etcd:
	I1028 11:09:04.485656    7696 command_runner.go:130] > member
	I1028 11:09:04.485927    7696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 11:09:04.486040    7696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 11:09:04.498647    7696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 11:09:04.528048    7696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:09:04.530120    7696 kubeconfig.go:125] found "functional-150200" server: "https://172.27.250.220:8441"
	I1028 11:09:04.531387    7696 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:09:04.532336    7696 kapi.go:59] client config for functional-150200: &rest.Config{Host:"https://172.27.250.220:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:09:04.534133    7696 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:09:04.544952    7696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 11:09:04.582562    7696 kubeadm.go:630] The running cluster does not require reconfiguration: 172.27.250.220
	I1028 11:09:04.582658    7696 kubeadm.go:1160] stopping kube-system containers ...
	I1028 11:09:04.592078    7696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 11:09:04.675163    7696 command_runner.go:130] > 3885df285142
	I1028 11:09:04.675163    7696 command_runner.go:130] > 353c83554b86
	I1028 11:09:04.675163    7696 command_runner.go:130] > 9338df5b8f68
	I1028 11:09:04.675163    7696 command_runner.go:130] > 64f1dcb75042
	I1028 11:09:04.675163    7696 command_runner.go:130] > a6838367fab3
	I1028 11:09:04.675163    7696 command_runner.go:130] > 155a86f12ee2
	I1028 11:09:04.675163    7696 command_runner.go:130] > 001506fa6797
	I1028 11:09:04.675163    7696 command_runner.go:130] > 48bb0201c2a9
	I1028 11:09:04.675163    7696 command_runner.go:130] > 6b0203449194
	I1028 11:09:04.675163    7696 command_runner.go:130] > ae70347c0705
	I1028 11:09:04.675163    7696 command_runner.go:130] > 3acaea8ee08e
	I1028 11:09:04.675163    7696 command_runner.go:130] > fe3f13a4911f
	I1028 11:09:04.675163    7696 command_runner.go:130] > 338d161c4f31
	I1028 11:09:04.675163    7696 command_runner.go:130] > c10fc2a26deb
	I1028 11:09:04.675163    7696 command_runner.go:130] > 0059db91b7b1
	I1028 11:09:04.675163    7696 command_runner.go:130] > 8327683de18c
	I1028 11:09:04.675163    7696 command_runner.go:130] > ce9547f92516
	I1028 11:09:04.675163    7696 command_runner.go:130] > ee5398bee11c
	I1028 11:09:04.675163    7696 command_runner.go:130] > bb59e0a81ac9
	I1028 11:09:04.675163    7696 command_runner.go:130] > 4d1ba26bd28c
	I1028 11:09:04.675163    7696 command_runner.go:130] > bcc12f86e67e
	I1028 11:09:04.675163    7696 command_runner.go:130] > 9a20dd54b3ca
	I1028 11:09:04.675163    7696 command_runner.go:130] > 8563f8cf9020
	I1028 11:09:04.675163    7696 command_runner.go:130] > 967bbb1f04e3
	I1028 11:09:04.675163    7696 docker.go:483] Stopping containers: [3885df285142 353c83554b86 9338df5b8f68 64f1dcb75042 a6838367fab3 155a86f12ee2 001506fa6797 48bb0201c2a9 6b0203449194 ae70347c0705 3acaea8ee08e fe3f13a4911f 338d161c4f31 c10fc2a26deb 0059db91b7b1 8327683de18c ce9547f92516 ee5398bee11c bb59e0a81ac9 4d1ba26bd28c bcc12f86e67e 9a20dd54b3ca 8563f8cf9020 967bbb1f04e3]
	I1028 11:09:04.687413    7696 ssh_runner.go:195] Run: docker stop 3885df285142 353c83554b86 9338df5b8f68 64f1dcb75042 a6838367fab3 155a86f12ee2 001506fa6797 48bb0201c2a9 6b0203449194 ae70347c0705 3acaea8ee08e fe3f13a4911f 338d161c4f31 c10fc2a26deb 0059db91b7b1 8327683de18c ce9547f92516 ee5398bee11c bb59e0a81ac9 4d1ba26bd28c bcc12f86e67e 9a20dd54b3ca 8563f8cf9020 967bbb1f04e3
	I1028 11:09:06.884998    7696 command_runner.go:130] > 3885df285142
	I1028 11:09:06.885068    7696 command_runner.go:130] > 353c83554b86
	I1028 11:09:06.885068    7696 command_runner.go:130] > 9338df5b8f68
	I1028 11:09:06.885068    7696 command_runner.go:130] > 64f1dcb75042
	I1028 11:09:06.885068    7696 command_runner.go:130] > a6838367fab3
	I1028 11:09:06.885174    7696 command_runner.go:130] > 155a86f12ee2
	I1028 11:09:06.885174    7696 command_runner.go:130] > 001506fa6797
	I1028 11:09:06.885174    7696 command_runner.go:130] > 48bb0201c2a9
	I1028 11:09:06.885174    7696 command_runner.go:130] > 6b0203449194
	I1028 11:09:06.885174    7696 command_runner.go:130] > ae70347c0705
	I1028 11:09:06.885174    7696 command_runner.go:130] > 3acaea8ee08e
	I1028 11:09:06.885174    7696 command_runner.go:130] > fe3f13a4911f
	I1028 11:09:06.885234    7696 command_runner.go:130] > 338d161c4f31
	I1028 11:09:06.885234    7696 command_runner.go:130] > c10fc2a26deb
	I1028 11:09:06.885234    7696 command_runner.go:130] > 0059db91b7b1
	I1028 11:09:06.885234    7696 command_runner.go:130] > 8327683de18c
	I1028 11:09:06.885234    7696 command_runner.go:130] > ce9547f92516
	I1028 11:09:06.885234    7696 command_runner.go:130] > ee5398bee11c
	I1028 11:09:06.885292    7696 command_runner.go:130] > bb59e0a81ac9
	I1028 11:09:06.885292    7696 command_runner.go:130] > 4d1ba26bd28c
	I1028 11:09:06.885316    7696 command_runner.go:130] > bcc12f86e67e
	I1028 11:09:06.885316    7696 command_runner.go:130] > 9a20dd54b3ca
	I1028 11:09:06.885316    7696 command_runner.go:130] > 8563f8cf9020
	I1028 11:09:06.885316    7696 command_runner.go:130] > 967bbb1f04e3
	I1028 11:09:06.886753    7696 ssh_runner.go:235] Completed: docker stop 3885df285142 353c83554b86 9338df5b8f68 64f1dcb75042 a6838367fab3 155a86f12ee2 001506fa6797 48bb0201c2a9 6b0203449194 ae70347c0705 3acaea8ee08e fe3f13a4911f 338d161c4f31 c10fc2a26deb 0059db91b7b1 8327683de18c ce9547f92516 ee5398bee11c bb59e0a81ac9 4d1ba26bd28c bcc12f86e67e 9a20dd54b3ca 8563f8cf9020 967bbb1f04e3: (2.1993145s)
	I1028 11:09:06.898762    7696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 11:09:06.980781    7696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:09:07.003097    7696 command_runner.go:130] > -rw------- 1 root root 5647 Oct 28 11:06 /etc/kubernetes/admin.conf
	I1028 11:09:07.003245    7696 command_runner.go:130] > -rw------- 1 root root 5654 Oct 28 11:07 /etc/kubernetes/controller-manager.conf
	I1028 11:09:07.003245    7696 command_runner.go:130] > -rw------- 1 root root 2007 Oct 28 11:07 /etc/kubernetes/kubelet.conf
	I1028 11:09:07.003245    7696 command_runner.go:130] > -rw------- 1 root root 5606 Oct 28 11:07 /etc/kubernetes/scheduler.conf
	I1028 11:09:07.003245    7696 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct 28 11:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Oct 28 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 28 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Oct 28 11:07 /etc/kubernetes/scheduler.conf
	
	I1028 11:09:07.013907    7696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1028 11:09:07.034164    7696 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1028 11:09:07.049747    7696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1028 11:09:07.070959    7696 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1028 11:09:07.082694    7696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1028 11:09:07.104503    7696 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:09:07.115209    7696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:09:07.151890    7696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1028 11:09:07.172855    7696 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:09:07.185845    7696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:09:07.219850    7696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:09:07.237909    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:09:07.357475    7696 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:09:07.357607    7696 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1028 11:09:07.357607    7696 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1028 11:09:07.357679    7696 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 11:09:07.357679    7696 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1028 11:09:07.357679    7696 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1028 11:09:07.357679    7696 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1028 11:09:07.357679    7696 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1028 11:09:07.357746    7696 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1028 11:09:07.357766    7696 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 11:09:07.357766    7696 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 11:09:07.357766    7696 command_runner.go:130] > [certs] Using the existing "sa" key
	I1028 11:09:07.357834    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:09:08.471201    7696 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:09:08.471793    7696 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1028 11:09:08.471793    7696 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1028 11:09:08.471793    7696 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1028 11:09:08.471793    7696 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:09:08.471793    7696 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:09:08.471793    7696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113927s)
	I1028 11:09:08.471916    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:09:08.826762    7696 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:09:08.826762    7696 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:09:08.826762    7696 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1028 11:09:08.826933    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:09:08.914358    7696 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:09:08.914751    7696 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:09:08.918436    7696 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:09:08.921158    7696 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:09:08.921466    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:09:09.018331    7696 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:09:09.018360    7696 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:09:09.030370    7696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:09:09.530707    7696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:09:10.028555    7696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:09:10.533252    7696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:09:10.563805    7696 command_runner.go:130] > 5703
	I1028 11:09:10.563907    7696 api_server.go:72] duration metric: took 1.5454274s to wait for apiserver process to appear ...
	I1028 11:09:10.563907    7696 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:09:10.563986    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:13.542658    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 11:09:13.542759    7696 api_server.go:103] status: https://172.27.250.220:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 11:09:13.542938    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:13.606395    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 11:09:13.606395    7696 api_server.go:103] status: https://172.27.250.220:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 11:09:13.606395    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:13.651557    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 11:09:13.651634    7696 api_server.go:103] status: https://172.27.250.220:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 11:09:14.064279    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:14.075208    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 11:09:14.075251    7696 api_server.go:103] status: https://172.27.250.220:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 11:09:14.565178    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:14.574000    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 11:09:14.574503    7696 api_server.go:103] status: https://172.27.250.220:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 11:09:15.064653    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:15.084250    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 11:09:15.084316    7696 api_server.go:103] status: https://172.27.250.220:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 11:09:15.564251    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:15.572946    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 200:
	ok
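
The 403/500 sequence above is the healthz wait: anonymous probes are rejected with 403 until the RBAC bootstrap roles that permit unauthenticated /healthz access exist, then 500 while the rbac and scheduling bootstrap post-start hooks finish, and finally 200 with the body "ok". A rough sketch of such a polling loop, assuming an unauthenticated client that skips TLS verification (the real check reuses the cluster's credentials and its own retry schedule):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification is an assumption for brevity only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok" once every post-start hook reports healthy
			}
			// 403 (anonymous user) and 500 (hooks still running) both mean "retry".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.27.250.220:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
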
	I1028 11:09:15.573246    7696 round_trippers.go:463] GET https://172.27.250.220:8441/version
	I1028 11:09:15.573305    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:15.573364    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:15.573392    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:15.589038    7696 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1028 11:09:15.589038    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:15.589038    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:15.589038    7696 round_trippers.go:580]     Content-Length: 263
	I1028 11:09:15.589038    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:15 GMT
	I1028 11:09:15.589038    7696 round_trippers.go:580]     Audit-Id: a601dce0-cb97-44c5-8d4e-3f601d237779
	I1028 11:09:15.589038    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:15.589038    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:15.589038    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:15.589038    7696 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.2",
	  "gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-10-22T20:28:14Z",
	  "goVersion": "go1.22.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1028 11:09:15.589038    7696 api_server.go:141] control plane version: v1.31.2
	I1028 11:09:15.589038    7696 api_server.go:131] duration metric: took 5.0250735s to wait for apiserver health ...
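
Once /healthz succeeds, the GET /version shown above confirms the control plane version (v1.31.2). Decoding that payload is a one-liner; the struct below is a hand-rolled stand-in for the fields actually used here, not Kubernetes' own version.Info type, and the body string is an abridged copy of the response in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo declares only the fields of the /version payload that matter
// for the "control plane version" line above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	body := []byte(`{"major":"1","minor":"31","gitVersion":"v1.31.2","goVersion":"go1.22.8","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (%s.%s, %s)\n", v.GitVersion, v.Major, v.Minor, v.Platform)
}
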
	I1028 11:09:15.589038    7696 cni.go:84] Creating CNI manager for ""
	I1028 11:09:15.589038    7696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 11:09:15.592035    7696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 11:09:15.606053    7696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 11:09:15.626582    7696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
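
The two commands above create /etc/cni/net.d and copy a 496-byte 1-k8s.conflist into it to enable the bridge CNI. The exact payload is not shown in the log, so the conflist below is only a representative bridge configuration (all field values are illustrative assumptions), written the way the mkdir + scp pair would land it inside the guest; it would need to run as root:

package main

import "os"

func main() {
	// Representative bridge CNI conflist; NOT the literal file minikube ships.
	conflist := []byte(`{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`)
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", conflist, 0o644); err != nil {
		panic(err)
	}
}
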
	I1028 11:09:15.663670    7696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:09:15.663670    7696 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:09:15.664037    7696 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:09:15.664037    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:15.664037    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:15.664037    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:15.664037    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:15.680011    7696 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1028 11:09:15.680011    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:15.680011    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:15.680011    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:15 GMT
	I1028 11:09:15.680011    7696 round_trippers.go:580]     Audit-Id: 02475a7d-50a0-443a-895d-fc65bf015005
	I1028 11:09:15.680011    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:15.680011    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:15.680011    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:15.682028    7696 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"555"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"524","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52808 chars]
	I1028 11:09:15.687023    7696 system_pods.go:59] 7 kube-system pods found
	I1028 11:09:15.687023    7696 system_pods.go:61] "coredns-7c65d6cfc9-bbbsr" [2c1be340-9d91-4d11-b776-a17e2a7409d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 11:09:15.687023    7696 system_pods.go:61] "etcd-functional-150200" [deeea244-f2b0-4060-b0c7-c882b8edf88d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 11:09:15.687023    7696 system_pods.go:61] "kube-apiserver-functional-150200" [91d76a57-02b8-416d-a13d-8d1b3d78c0ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 11:09:15.687023    7696 system_pods.go:61] "kube-controller-manager-functional-150200" [74b7db98-fcff-4451-b704-f889d93fec74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 11:09:15.687023    7696 system_pods.go:61] "kube-proxy-99k8l" [77fef842-5652-4270-ac9e-53d0bc432778] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 11:09:15.687023    7696 system_pods.go:61] "kube-scheduler-functional-150200" [e86b3da4-c60d-4a99-8fa9-47e9c5a18934] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 11:09:15.687023    7696 system_pods.go:61] "storage-provisioner" [11f21928-6ded-4c06-ba52-2f346f9fb8b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 11:09:15.687023    7696 system_pods.go:74] duration metric: took 23.3524ms to wait for pod list to return data ...
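
The PodList request above ("7 kube-system pods found") is the wait for kube-system pods to appear. The log shows the raw REST call made through minikube's instrumented round tripper; an equivalent client-go sketch, assuming an ordinary kubeconfig on the host:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to the GET /api/v1/namespaces/kube-system/pods request above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
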
	I1028 11:09:15.687023    7696 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:09:15.687023    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes
	I1028 11:09:15.687023    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:15.687023    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:15.687023    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:15.697028    7696 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:09:15.697028    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:15.697028    7696 round_trippers.go:580]     Audit-Id: 88ac4c15-3db4-4de1-a2dd-d60373954982
	I1028 11:09:15.697028    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:15.697028    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:15.697028    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:15.697028    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:15.697028    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:15 GMT
	I1028 11:09:15.698089    7696 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"555"},"items":[{"metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I1028 11:09:15.698089    7696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:09:15.698089    7696 node_conditions.go:123] node cpu capacity is 2
	I1028 11:09:15.698089    7696 node_conditions.go:105] duration metric: took 11.0661ms to run NodePressure ...
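
The NodeList request above backs the NodePressure verification: each node's ephemeral-storage and cpu capacity is read (17734596Ki and 2 here) and its pressure conditions are inspected. A sketch of the same inspection with client-go, under the same host-kubeconfig assumption as the previous snippet:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values matching the "storage ephemeral capacity" and
		// "cpu capacity" lines in the log.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
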
	I1028 11:09:15.698089    7696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:09:16.187288    7696 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1028 11:09:16.187288    7696 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1028 11:09:16.187288    7696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 11:09:16.187288    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1028 11:09:16.187288    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:16.187288    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:16.187288    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:16.191038    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:16.191038    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:16.191038    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:16.191038    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:16.191038    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:16 GMT
	I1028 11:09:16.191038    7696 round_trippers.go:580]     Audit-Id: 09ebd930-75e5-43c5-a3f9-6b0459b09f7b
	I1028 11:09:16.191038    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:16.191038    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:16.192019    7696 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"560"},"items":[{"metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31288 chars]
	I1028 11:09:16.194033    7696 kubeadm.go:739] kubelet initialised
	I1028 11:09:16.194033    7696 kubeadm.go:740] duration metric: took 6.7451ms waiting for restarted kubelet to initialise ...
	I1028 11:09:16.194033    7696 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:09:16.194033    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:16.194033    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:16.194033    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:16.194033    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:16.199476    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:16.199476    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:16.199476    7696 round_trippers.go:580]     Audit-Id: 2117e065-d46a-4336-8474-c7d3a48330f2
	I1028 11:09:16.199476    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:16.199476    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:16.199476    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:16.199476    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:16.199476    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:16 GMT
	I1028 11:09:16.200870    7696 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"560"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"557","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52453 chars]
	I1028 11:09:16.203922    7696 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:16.204124    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:16.204124    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:16.204193    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:16.204193    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:16.206835    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:16.207614    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:16.207614    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:16.207614    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:16.207614    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:16 GMT
	I1028 11:09:16.207614    7696 round_trippers.go:580]     Audit-Id: 2c871ec0-214e-431a-ac58-ab15d0edd40f
	I1028 11:09:16.207689    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:16.207689    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:16.208018    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"557","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6933 chars]
	I1028 11:09:16.208700    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:16.208700    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:16.208700    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:16.208700    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:16.211606    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:16.211606    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:16.211606    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:16 GMT
	I1028 11:09:16.211606    7696 round_trippers.go:580]     Audit-Id: bbb5483c-fd69-4410-9436-0d345a1b3939
	I1028 11:09:16.211606    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:16.211606    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:16.211606    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:16.211606    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:16.211606    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:16.704179    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:16.704179    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:16.704179    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:16.704179    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:16.708460    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:16.708533    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:16.708533    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:16.708533    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:16 GMT
	I1028 11:09:16.708533    7696 round_trippers.go:580]     Audit-Id: 3be2afbd-1f61-4648-b56e-e4c3dd680d9c
	I1028 11:09:16.708598    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:16.708598    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:16.708598    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:16.708795    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"557","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6933 chars]
	I1028 11:09:16.709906    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:16.709906    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:16.709906    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:16.709906    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:16.715435    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:16.715551    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:16.715551    7696 round_trippers.go:580]     Audit-Id: 773a17ad-34ad-4cf7-ba9e-236609d50903
	I1028 11:09:16.715551    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:16.715551    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:16.715551    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:16.715551    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:16.715551    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:16 GMT
	I1028 11:09:16.715551    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:17.204309    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:17.204309    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:17.204309    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:17.204309    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:17.208263    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:17.208389    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:17.208469    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:17.208469    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:17.208469    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:17 GMT
	I1028 11:09:17.208469    7696 round_trippers.go:580]     Audit-Id: c06f1b43-a912-4635-b55f-50126578b351
	I1028 11:09:17.208530    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:17.208530    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:17.208751    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"557","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6933 chars]
	I1028 11:09:17.209875    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:17.209875    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:17.209875    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:17.209875    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:17.212909    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:17.212909    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:17.212909    7696 round_trippers.go:580]     Audit-Id: deba27c4-b8ab-4ae7-aa88-c62ae873576d
	I1028 11:09:17.212909    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:17.212909    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:17.212909    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:17.212909    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:17.212909    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:17 GMT
	I1028 11:09:17.212909    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:17.705184    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:17.705184    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:17.705184    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:17.705184    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:17.709963    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:17.709963    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:17.709963    7696 round_trippers.go:580]     Audit-Id: 4a427820-d8fa-4ed8-a0b6-15de085ea7f1
	I1028 11:09:17.710033    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:17.710033    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:17.710033    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:17.710033    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:17.710033    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:17 GMT
	I1028 11:09:17.710242    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"557","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6933 chars]
	I1028 11:09:17.711130    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:17.711187    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:17.711187    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:17.711187    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:17.714771    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:17.715046    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:17.715046    7696 round_trippers.go:580]     Audit-Id: 711df57b-f06a-48cd-b003-a8a9ffd6ee5a
	I1028 11:09:17.715046    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:17.715046    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:17.715046    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:17.715046    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:17.715046    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:17 GMT
	I1028 11:09:17.715402    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:18.204862    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:18.204862    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:18.204862    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:18.204862    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:18.209916    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:18.209916    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:18.209987    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:18.209987    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:18 GMT
	I1028 11:09:18.209987    7696 round_trippers.go:580]     Audit-Id: 831b4768-cfc5-48be-be2e-608daa5b3989
	I1028 11:09:18.209987    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:18.209987    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:18.209987    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:18.210409    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"557","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6933 chars]
	I1028 11:09:18.211316    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:18.211394    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:18.211394    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:18.211394    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:18.214230    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:18.214479    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:18.214479    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:18 GMT
	I1028 11:09:18.214479    7696 round_trippers.go:580]     Audit-Id: 784a5b09-cc68-442d-99dd-dbf8693b0b91
	I1028 11:09:18.214479    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:18.214532    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:18.214532    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:18.214532    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:18.214532    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:18.215316    7696 pod_ready.go:103] pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace has status "Ready":"False"
	I1028 11:09:18.705414    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:18.705414    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:18.705414    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:18.705414    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:18.711082    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:18.711184    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:18.711184    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:18.711184    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:18.711184    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:18.711230    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:18.711230    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:18 GMT
	I1028 11:09:18.711230    7696 round_trippers.go:580]     Audit-Id: 2dbd13ff-588d-4052-b49b-89b51b201747
	I1028 11:09:18.711363    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"568","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6704 chars]
	I1028 11:09:18.712488    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:18.712543    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:18.712543    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:18.712543    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:18.715749    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:18.715749    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:18.715749    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:18.715749    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:18.715749    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:18.715749    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:18 GMT
	I1028 11:09:18.715749    7696 round_trippers.go:580]     Audit-Id: 56f68c7b-9f8f-4ebb-b06c-7f035aca7515
	I1028 11:09:18.715749    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:18.715749    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:18.716569    7696 pod_ready.go:93] pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:18.716632    7696 pod_ready.go:82] duration metric: took 2.5126246s for pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace to be "Ready" ...
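
The pod_ready.go pattern above repeats for each system-critical pod: fetch the pod, fetch its node, and stop once the pod's Ready condition is True (as happens for coredns at 11:09:18 after 2.5s). A condensed client-go sketch of that readiness loop, again assuming a host kubeconfig rather than minikube's internal client:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// signal behind the `has status "Ready":"True"` lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("pod %s/%s was not Ready within %s", namespace, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-bbbsr", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
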
	I1028 11:09:18.716632    7696 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:18.716790    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:18.716790    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:18.716790    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:18.716790    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:18.720077    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:18.720077    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:18.720077    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:18.720077    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:18.720077    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:18.720077    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:18 GMT
	I1028 11:09:18.720077    7696 round_trippers.go:580]     Audit-Id: 0d25fd07-6772-43a3-a6a2-4762a691ea93
	I1028 11:09:18.720077    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:18.720077    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:18.721000    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:18.721254    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:18.721254    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:18.721254    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:18.723650    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:18.723650    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:18.723650    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:18 GMT
	I1028 11:09:18.723650    7696 round_trippers.go:580]     Audit-Id: 3c0c5db5-1157-439c-ad52-e65450a21e77
	I1028 11:09:18.723650    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:18.723650    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:18.723650    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:18.723650    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:18.723650    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:19.217643    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:19.217643    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:19.217643    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:19.217643    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:19.221756    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:19.221756    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:19.221756    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:19.221756    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:19.221846    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:19 GMT
	I1028 11:09:19.221846    7696 round_trippers.go:580]     Audit-Id: 1c95afdb-1ae4-4c76-9d06-35da6d5cb194
	I1028 11:09:19.221927    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:19.221927    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:19.221927    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:19.222858    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:19.222858    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:19.222858    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:19.222858    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:19.225582    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:19.226282    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:19.226282    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:19 GMT
	I1028 11:09:19.226282    7696 round_trippers.go:580]     Audit-Id: 0ca47f97-86ca-46e7-a31d-73cbe56d8fa7
	I1028 11:09:19.226282    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:19.226282    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:19.226361    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:19.226361    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:19.226626    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:19.716999    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:19.716999    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:19.716999    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:19.716999    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:19.722650    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:19.722650    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:19.722650    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:19.722650    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:19.722650    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:19 GMT
	I1028 11:09:19.722650    7696 round_trippers.go:580]     Audit-Id: dffb3995-1614-4462-a39a-124d640b09e3
	I1028 11:09:19.722650    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:19.722650    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:19.722746    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:19.723813    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:19.723813    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:19.723813    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:19.723905    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:19.730694    7696 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:09:19.730694    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:19.730694    7696 round_trippers.go:580]     Audit-Id: badb97cc-fcdf-4dc9-9c4e-7555a61c8acf
	I1028 11:09:19.730757    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:19.730757    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:19.730779    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:19.730779    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:19.730779    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:19 GMT
	I1028 11:09:19.731039    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:20.216799    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:20.216799    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:20.216799    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:20.216799    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:20.221924    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:20.222009    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:20.222009    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:20.222009    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:20 GMT
	I1028 11:09:20.222102    7696 round_trippers.go:580]     Audit-Id: b32b5641-d5e3-4057-8643-b7c2055cdf6d
	I1028 11:09:20.222102    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:20.222102    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:20.222102    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:20.222207    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:20.223168    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:20.223168    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:20.223168    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:20.223168    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:20.226911    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:20.226911    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:20.226911    7696 round_trippers.go:580]     Audit-Id: 81fca5c1-9a6f-4b70-b7ad-106d41deff64
	I1028 11:09:20.226978    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:20.226978    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:20.226978    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:20.226978    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:20.226978    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:20 GMT
	I1028 11:09:20.228443    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:20.717396    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:20.717396    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:20.717396    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:20.717396    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:20.721089    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:20.721089    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:20.721089    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:20.721089    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:20 GMT
	I1028 11:09:20.721089    7696 round_trippers.go:580]     Audit-Id: 4be4f69b-8765-4baf-97eb-7085e379078c
	I1028 11:09:20.721089    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:20.721089    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:20.721089    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:20.722385    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:20.722641    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:20.722641    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:20.722641    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:20.722641    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:20.728725    7696 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:09:20.728725    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:20.728725    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:20 GMT
	I1028 11:09:20.728725    7696 round_trippers.go:580]     Audit-Id: 2c37cbf8-2cf4-4369-a819-f360632a6512
	I1028 11:09:20.728725    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:20.728725    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:20.728725    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:20.728725    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:20.728725    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:20.729336    7696 pod_ready.go:103] pod "etcd-functional-150200" in "kube-system" namespace has status "Ready":"False"
	I1028 11:09:21.216908    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:21.216908    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:21.216908    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:21.216908    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:21.223039    7696 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:09:21.223039    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:21.223039    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:21 GMT
	I1028 11:09:21.223039    7696 round_trippers.go:580]     Audit-Id: a853b542-0f2f-4f08-8cc5-1a60564b76b2
	I1028 11:09:21.223039    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:21.223039    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:21.223039    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:21.223143    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:21.223354    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:21.223975    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:21.223975    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:21.223975    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:21.223975    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:21.227357    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:21.227357    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:21.227357    7696 round_trippers.go:580]     Audit-Id: 56a7491b-4586-493e-9a44-923f49a9181e
	I1028 11:09:21.227357    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:21.227357    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:21.227357    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:21.227357    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:21.227357    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:21 GMT
	I1028 11:09:21.227357    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:21.716828    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:21.716828    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:21.716828    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:21.716828    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:21.721692    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:21.721817    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:21.721817    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:21.721817    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:21.721817    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:21.721975    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:21 GMT
	I1028 11:09:21.721975    7696 round_trippers.go:580]     Audit-Id: fb57942f-644d-44a2-a139-8eb251ccb4f3
	I1028 11:09:21.721975    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:21.721975    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:21.723204    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:21.723283    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:21.723283    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:21.723283    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:21.725626    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:21.725626    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:21.725626    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:21.725626    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:21 GMT
	I1028 11:09:21.725626    7696 round_trippers.go:580]     Audit-Id: 8fa9ae18-3162-4de6-a07f-dc05879a012b
	I1028 11:09:21.725626    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:21.725626    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:21.725626    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:21.726752    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:22.217721    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:22.217805    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:22.217805    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:22.217805    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:22.225065    7696 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:09:22.225065    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:22.225065    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:22.225065    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:22.225065    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:22 GMT
	I1028 11:09:22.225065    7696 round_trippers.go:580]     Audit-Id: 074c7512-b98d-47a4-a3f3-6367b47aa82a
	I1028 11:09:22.225065    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:22.225065    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:22.225977    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:22.226799    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:22.226799    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:22.226799    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:22.226799    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:22.229778    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:22.229778    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:22.229778    7696 round_trippers.go:580]     Audit-Id: dae03398-2046-4bd5-af47-1675345d8e73
	I1028 11:09:22.229778    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:22.229778    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:22.229778    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:22.229778    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:22.229778    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:22 GMT
	I1028 11:09:22.229778    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:22.717106    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:22.717106    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:22.717106    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:22.717106    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:22.721490    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:22.721863    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:22.721863    7696 round_trippers.go:580]     Audit-Id: 41ffcd67-932e-4164-9f95-4f44c2eebc3c
	I1028 11:09:22.721863    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:22.721863    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:22.721863    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:22.721863    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:22.721863    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:22 GMT
	I1028 11:09:22.722400    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:22.723274    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:22.723338    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:22.723338    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:22.723338    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:22.726302    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:22.726302    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:22.726302    7696 round_trippers.go:580]     Audit-Id: 6e2f4479-6e29-4686-8116-081d5883c748
	I1028 11:09:22.726302    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:22.726302    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:22.726302    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:22.726302    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:22.726302    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:22 GMT
	I1028 11:09:22.735896    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:22.736186    7696 pod_ready.go:103] pod "etcd-functional-150200" in "kube-system" namespace has status "Ready":"False"
	I1028 11:09:23.217520    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:23.217520    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:23.217520    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:23.217520    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:23.222323    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:23.222323    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:23.222444    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:23.222444    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:23.222444    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:23.222444    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:23 GMT
	I1028 11:09:23.222444    7696 round_trippers.go:580]     Audit-Id: 33e3501d-5589-4e88-b45d-328519632d57
	I1028 11:09:23.222444    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:23.222658    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:23.223052    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:23.223052    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:23.223052    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:23.223052    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:23.228627    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:23.228673    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:23.228673    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:23.228723    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:23.228723    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:23 GMT
	I1028 11:09:23.228723    7696 round_trippers.go:580]     Audit-Id: 4f0cb068-8763-467e-8ed9-c9e14988428e
	I1028 11:09:23.228723    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:23.228723    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:23.228756    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:23.717113    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:23.717113    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:23.717113    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:23.717113    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:23.721785    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:23.721785    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:23.721785    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:23.721785    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:23.721892    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:23.721892    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:23 GMT
	I1028 11:09:23.721892    7696 round_trippers.go:580]     Audit-Id: 4b78302c-0965-4fa1-8277-197808229a61
	I1028 11:09:23.721892    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:23.722210    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:23.723054    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:23.723112    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:23.723112    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:23.723184    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:23.726351    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:23.726351    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:23.726422    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:23.726422    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:23.726422    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:23.726422    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:23.726422    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:23 GMT
	I1028 11:09:23.726422    7696 round_trippers.go:580]     Audit-Id: ffbb5d1d-2796-4a52-8b51-cc6516d03e09
	I1028 11:09:23.726761    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:24.217404    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:24.217466    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:24.217466    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:24.217466    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:24.222626    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:24.222626    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:24.222626    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:24.222626    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:24 GMT
	I1028 11:09:24.222626    7696 round_trippers.go:580]     Audit-Id: 1bebfe13-81ce-4fa7-963d-46845b27e310
	I1028 11:09:24.222626    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:24.222626    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:24.222626    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:24.222626    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:24.223908    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:24.223908    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:24.223908    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:24.223908    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:24.229015    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:24.229015    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:24.229015    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:24.229015    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:24.229015    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:24 GMT
	I1028 11:09:24.229015    7696 round_trippers.go:580]     Audit-Id: 840e4655-4bcc-4d69-a6b1-e916af287ae1
	I1028 11:09:24.229015    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:24.229015    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:24.229552    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:24.717785    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:24.717785    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:24.717785    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:24.717785    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:24.722444    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:24.722804    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:24.722865    7696 round_trippers.go:580]     Audit-Id: f71fe031-eb0a-4722-a256-0583a749048e
	I1028 11:09:24.722865    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:24.722865    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:24.722865    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:24.722947    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:24.722947    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:24 GMT
	I1028 11:09:24.723194    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:24.724122    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:24.724122    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:24.724177    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:24.724177    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:24.727503    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:24.727503    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:24.727503    7696 round_trippers.go:580]     Audit-Id: 8867e364-bbfd-4170-a9a0-ec458468f965
	I1028 11:09:24.727503    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:24.727503    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:24.727503    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:24.727503    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:24.727503    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:24 GMT
	I1028 11:09:24.727503    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:25.217352    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:25.217352    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:25.217352    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:25.217352    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:25.222307    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:25.222371    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:25.222371    7696 round_trippers.go:580]     Audit-Id: 6ca266f2-b6fe-44fd-a360-b652de9314e0
	I1028 11:09:25.222432    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:25.222432    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:25.222432    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:25.222432    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:25.222432    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:25 GMT
	I1028 11:09:25.222711    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:25.223528    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:25.223528    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:25.223528    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:25.223528    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:25.226360    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:25.226607    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:25.226607    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:25.226607    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:25.226607    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:25 GMT
	I1028 11:09:25.226607    7696 round_trippers.go:580]     Audit-Id: 6a247832-636b-4da3-a23e-f81ffb625636
	I1028 11:09:25.226607    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:25.226607    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:25.226934    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:25.227476    7696 pod_ready.go:103] pod "etcd-functional-150200" in "kube-system" namespace has status "Ready":"False"
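
[editor's sketch] The "has status \"Ready\":\"False\"" lines above come from checking the PodReady condition on the fetched Pod object. A minimal, self-contained illustration of that check (not minikube's actual pod_ready.go code; the example pod and its condition values are hypothetical):

// Sketch: deriving a Ready true/false result from a Pod's status conditions,
// mirroring the pod_ready.go log lines above. Requires the k8s.io/api module.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is present and True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical pod whose PodReady condition is False, as in the log line above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Printf("pod ready: %v\n", isPodReady(pod))
}
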
	I1028 11:09:25.716832    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:25.716832    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:25.716832    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:25.716832    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:25.722506    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:25.723640    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:25.723928    7696 round_trippers.go:580]     Audit-Id: b4ca15a5-15ca-4a17-bd00-b85109dbe5d1
	I1028 11:09:25.724093    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:25.724093    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:25.724093    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:25.724093    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:25.724093    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:25 GMT
	I1028 11:09:25.724093    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:25.726276    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:25.726276    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:25.726276    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:25.726276    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:25.729831    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:25.729831    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:25.729892    7696 round_trippers.go:580]     Audit-Id: 2c4e814e-79e3-4866-adeb-060dd204e10c
	I1028 11:09:25.729892    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:25.729892    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:25.729892    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:25.729892    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:25.729892    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:25 GMT
	I1028 11:09:25.730647    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:26.216843    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:26.216843    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:26.216843    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:26.216843    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:26.222433    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:26.222433    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:26.222433    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:26.222433    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:26.222433    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:26.222433    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:26.222433    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:26 GMT
	I1028 11:09:26.222433    7696 round_trippers.go:580]     Audit-Id: 169bd3b8-996e-417a-a47f-1c0a53713ac2
	I1028 11:09:26.222433    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:26.223461    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:26.223532    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:26.223532    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:26.223532    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:26.226936    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:26.226936    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:26.226936    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:26.226936    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:26.226936    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:26.226936    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:26 GMT
	I1028 11:09:26.226936    7696 round_trippers.go:580]     Audit-Id: 5fcd31cd-4ea9-41c3-8128-4a59911ea412
	I1028 11:09:26.226936    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:26.227475    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:26.716867    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:26.716867    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:26.716867    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:26.716867    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:26.721985    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:26.722074    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:26.722074    7696 round_trippers.go:580]     Audit-Id: eba0b2ac-bfee-4f54-b9ec-02725f49ad44
	I1028 11:09:26.722074    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:26.722074    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:26.722074    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:26.722074    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:26.722074    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:26 GMT
	I1028 11:09:26.722363    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"535","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I1028 11:09:26.722982    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:26.722982    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:26.723055    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:26.723055    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:26.725862    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:26.725862    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:26.725862    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:26.725862    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:26 GMT
	I1028 11:09:26.725957    7696 round_trippers.go:580]     Audit-Id: f3228eaa-03ad-41ba-aca3-abe3ad3a2d5c
	I1028 11:09:26.725957    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:26.725957    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:26.725957    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:26.726217    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:27.217776    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:27.217776    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.217776    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.217776    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.222500    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:27.222500    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.222500    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.222500    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.222500    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.222500    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.222500    7696 round_trippers.go:580]     Audit-Id: 5ad94062-98f0-4819-96ed-afc8e724ec1e
	I1028 11:09:27.222500    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.222500    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"581","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6686 chars]
	I1028 11:09:27.223601    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:27.223601    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.223601    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.223674    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.226192    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:27.226360    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.226360    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.226360    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.226360    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.226435    7696 round_trippers.go:580]     Audit-Id: 40b437e1-08a1-4ab8-94eb-939a006efc58
	I1028 11:09:27.226435    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.226435    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.226590    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:27.227177    7696 pod_ready.go:93] pod "etcd-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:27.227177    7696 pod_ready.go:82] duration metric: took 8.5104524s for pod "etcd-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:27.227177    7696 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-150200" in "kube-system" namespace to be "Ready" ...
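
[editor's sketch] The etcd wait above resolved after 8.5s of roughly half-second GET cycles, with a 4m0s budget per pod. A hedged sketch of the same wait-for-ready pattern using client-go; the kubeconfig path, poll interval, timeout, and pod name are assumptions chosen to match the log, not minikube's implementation:

// Sketch, assuming a reachable cluster and a kubeconfig at the path below:
// poll a kube-system pod every 500 ms, for up to 4 minutes, until its
// PodReady condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-functional-150200", metav1.GetOptions{})
			if err != nil {
				// Treat Get errors as "not ready yet" so a transient failure
				// does not abort the whole wait.
				return false, nil
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod ready after %s\n", time.Since(start))
}

Returning (false, nil) from the condition function keeps the poll going on transient errors; returning a non-nil error would end the wait immediately.
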
	I1028 11:09:27.227485    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-150200
	I1028 11:09:27.227485    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.227485    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.227485    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.230757    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:27.230757    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.230757    7696 round_trippers.go:580]     Audit-Id: 434ccd3c-cffc-4607-aa6d-4287ab89a8cd
	I1028 11:09:27.230757    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.230757    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.230757    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.230757    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.230757    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.231139    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-150200","namespace":"kube-system","uid":"91d76a57-02b8-416d-a13d-8d1b3d78c0ca","resourceVersion":"574","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.250.220:8441","kubernetes.io/config.hash":"447f0c441c6f19d7224e1aaa3d43970f","kubernetes.io/config.mirror":"447f0c441c6f19d7224e1aaa3d43970f","kubernetes.io/config.seen":"2024-10-28T11:07:08.851675069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7912 chars]
	I1028 11:09:27.232042    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:27.232071    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.232112    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.232112    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.235182    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:27.235182    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.235182    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.235182    7696 round_trippers.go:580]     Audit-Id: 6a8b141c-bdab-4413-abf4-1afde49fab65
	I1028 11:09:27.235182    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.235182    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.235182    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.235182    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.235182    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:27.235941    7696 pod_ready.go:93] pod "kube-apiserver-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:27.235941    7696 pod_ready.go:82] duration metric: took 8.7636ms for pod "kube-apiserver-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:27.235941    7696 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:27.235941    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:27.235941    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.235941    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.235941    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.238679    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:27.239083    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.239083    7696 round_trippers.go:580]     Audit-Id: 6414efcd-816f-4ef6-aa13-45928936fc3c
	I1028 11:09:27.239083    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.239083    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.239083    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.239083    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.239083    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.239250    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-150200","namespace":"kube-system","uid":"74b7db98-fcff-4451-b704-f889d93fec74","resourceVersion":"527","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.mirror":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.seen":"2024-10-28T11:07:08.851681069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I1028 11:09:27.240092    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:27.240092    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.240092    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.240092    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.242771    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:27.242862    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.242862    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.242862    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.242862    7696 round_trippers.go:580]     Audit-Id: 793e2637-2ec0-4d92-959c-77e5e6f22276
	I1028 11:09:27.242862    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.242862    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.242862    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.242970    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:27.736824    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:27.736824    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.736824    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.736824    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.741285    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:27.741285    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.741285    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.741285    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.741285    7696 round_trippers.go:580]     Audit-Id: 406d801f-b3c7-4c22-9f44-145f69b25663
	I1028 11:09:27.741285    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.741285    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.741285    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.741285    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-150200","namespace":"kube-system","uid":"74b7db98-fcff-4451-b704-f889d93fec74","resourceVersion":"527","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.mirror":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.seen":"2024-10-28T11:07:08.851681069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I1028 11:09:27.742030    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:27.742030    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:27.742030    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:27.742030    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:27.745231    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:27.745231    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:27.745231    7696 round_trippers.go:580]     Audit-Id: 7ad1857e-fa81-4a66-acd5-ddd55a4cab48
	I1028 11:09:27.745294    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:27.745294    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:27.745294    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:27.745294    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:27.745294    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:27 GMT
	I1028 11:09:27.745508    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:28.237390    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:28.237390    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:28.237390    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:28.237390    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:28.241673    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:28.241673    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:28.241673    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:28.241673    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:28 GMT
	I1028 11:09:28.241673    7696 round_trippers.go:580]     Audit-Id: bbe7349e-61ee-4308-8038-5c532cd24137
	I1028 11:09:28.241673    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:28.241673    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:28.241673    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:28.242204    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-150200","namespace":"kube-system","uid":"74b7db98-fcff-4451-b704-f889d93fec74","resourceVersion":"527","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.mirror":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.seen":"2024-10-28T11:07:08.851681069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I1028 11:09:28.243103    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:28.243103    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:28.243103    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:28.243103    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:28.246469    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:28.246469    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:28.246469    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:28.246469    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:28.246469    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:28 GMT
	I1028 11:09:28.246557    7696 round_trippers.go:580]     Audit-Id: d443257f-2a6a-4ac1-b266-ab2a6dd5f30a
	I1028 11:09:28.246557    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:28.246557    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:28.246842    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:28.737001    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:28.737001    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:28.737092    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:28.737092    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:28.740688    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:28.740775    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:28.740775    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:28 GMT
	I1028 11:09:28.740775    7696 round_trippers.go:580]     Audit-Id: c2b78b11-a4c6-48ca-8f75-a89e7747aabc
	I1028 11:09:28.740775    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:28.740775    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:28.740775    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:28.740775    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:28.741136    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-150200","namespace":"kube-system","uid":"74b7db98-fcff-4451-b704-f889d93fec74","resourceVersion":"582","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.mirror":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.seen":"2024-10-28T11:07:08.851681069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7738 chars]
	I1028 11:09:28.742044    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:28.742144    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:28.742144    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:28.742144    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:28.744664    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:28.744664    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:28.744664    7696 round_trippers.go:580]     Audit-Id: 2cbe1111-3e5a-4441-8039-947cd600fad4
	I1028 11:09:28.744664    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:28.744664    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:28.744745    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:28.744745    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:28.744745    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:28 GMT
	I1028 11:09:28.744865    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:29.236543    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:29.236543    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.236543    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.236543    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.241917    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:29.241917    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.241991    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.241991    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.241991    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.241991    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.241991    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.241991    7696 round_trippers.go:580]     Audit-Id: 93fed701-c75f-46a2-ac37-274fdd64fac2
	I1028 11:09:29.242168    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-150200","namespace":"kube-system","uid":"74b7db98-fcff-4451-b704-f889d93fec74","resourceVersion":"583","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.mirror":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.seen":"2024-10-28T11:07:08.851681069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I1028 11:09:29.242950    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:29.242950    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.242950    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.242950    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.250585    7696 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:09:29.250585    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.250585    7696 round_trippers.go:580]     Audit-Id: 7d2b480a-ed58-4c29-a5d9-c50f40b86d7d
	I1028 11:09:29.250766    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.250766    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.250766    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.250766    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.250766    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.251195    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:29.251343    7696 pod_ready.go:93] pod "kube-controller-manager-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:29.251343    7696 pod_ready.go:82] duration metric: took 2.0153803s for pod "kube-controller-manager-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.251343    7696 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-99k8l" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.251343    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-proxy-99k8l
	I1028 11:09:29.251343    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.251343    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.251343    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.260926    7696 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:09:29.260926    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.260926    7696 round_trippers.go:580]     Audit-Id: faa81531-154b-4378-b93b-ef6f543f733b
	I1028 11:09:29.260926    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.260926    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.260926    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.260926    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.260926    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.260926    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-99k8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"77fef842-5652-4270-ac9e-53d0bc432778","resourceVersion":"559","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"controller-revision-hash":"77987969cc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8bbe38f6-0a88-4efc-adc0-717fdada0d7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8bbe38f6-0a88-4efc-adc0-717fdada0d7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6406 chars]
	I1028 11:09:29.260926    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:29.260926    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.260926    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.260926    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.264500    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:29.264569    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.264569    7696 round_trippers.go:580]     Audit-Id: e2f1e1d0-4b0f-482e-a7f8-15bd0bdd36e0
	I1028 11:09:29.264569    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.264569    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.264569    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.264569    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.264569    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.264685    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:29.264685    7696 pod_ready.go:93] pod "kube-proxy-99k8l" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:29.265248    7696 pod_ready.go:82] duration metric: took 13.9051ms for pod "kube-proxy-99k8l" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.265248    7696 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.265373    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-150200
	I1028 11:09:29.265451    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.265451    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.265451    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.269771    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:29.269771    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.269771    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.269771    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.269771    7696 round_trippers.go:580]     Audit-Id: 42bdcdcc-3e77-431f-aace-c7304bc31b3a
	I1028 11:09:29.269771    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.269771    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.269771    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.270037    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-150200","namespace":"kube-system","uid":"e86b3da4-c60d-4a99-8fa9-47e9c5a18934","resourceVersion":"579","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"89c0cf5200ccb41e6f151971da196681","kubernetes.io/config.mirror":"89c0cf5200ccb41e6f151971da196681","kubernetes.io/config.seen":"2024-10-28T11:07:08.851682369Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I1028 11:09:29.270666    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:29.270666    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.270666    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.270731    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.274347    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:29.275334    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.275334    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.275334    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.275334    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.275334    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.275334    7696 round_trippers.go:580]     Audit-Id: dd4ecab8-9fd8-449f-9c3f-03dc3415255f
	I1028 11:09:29.275334    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.275640    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:29.275896    7696 pod_ready.go:93] pod "kube-scheduler-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:29.275896    7696 pod_ready.go:82] duration metric: took 10.6473ms for pod "kube-scheduler-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.275896    7696 pod_ready.go:39] duration metric: took 13.0817196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:09:29.275896    7696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:09:29.307214    7696 command_runner.go:130] > -16
	I1028 11:09:29.307214    7696 ops.go:34] apiserver oom_adj: -16
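	(Editorial aside, not part of the captured log: the two lines above show minikube resolving the kube-apiserver PID with pgrep and reading its oom_adj from /proc to confirm the apiserver is protected from the OOM killer. A minimal, illustrative Go sketch of the same check run directly on the Linux guest, not minikube's own SSH-runner code; the command and the /proc path come from the log, everything else is an assumption.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Resolve the kube-apiserver PID, as the logged `pgrep kube-apiserver` does.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "pgrep failed:", err)
            return
        }
        pid := strings.Fields(string(out))[0]

        // Read the legacy oom_adj value the test records (-16 here).
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, "read failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }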
	I1028 11:09:29.307367    7696 kubeadm.go:597] duration metric: took 24.8208986s to restartPrimaryControlPlane
	I1028 11:09:29.307367    7696 kubeadm.go:394] duration metric: took 24.9545998s to StartCluster
	I1028 11:09:29.307436    7696 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:09:29.307735    7696 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:09:29.308977    7696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:09:29.310932    7696 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.250.220 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:09:29.310932    7696 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:09:29.311153    7696 addons.go:69] Setting storage-provisioner=true in profile "functional-150200"
	I1028 11:09:29.311153    7696 addons.go:69] Setting default-storageclass=true in profile "functional-150200"
	I1028 11:09:29.311255    7696 addons.go:234] Setting addon storage-provisioner=true in "functional-150200"
	I1028 11:09:29.311363    7696 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:09:29.311459    7696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-150200"
	W1028 11:09:29.311459    7696 addons.go:243] addon storage-provisioner should already be in state true
	I1028 11:09:29.311988    7696 host.go:66] Checking if "functional-150200" exists ...
	I1028 11:09:29.312871    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:09:29.313607    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
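	(Editorial aside, not part of the captured log: the libmachine lines above query the Hyper-V VM's power state through PowerShell. A hedged Go sketch that shells out with the same expression the log shows; the hard-coded VM name is taken from the log and is only illustrative.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same PowerShell expression as in the log; requires Windows with the Hyper-V module.
        ps := `( Hyper-V\Get-VM functional-150200 ).state`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).CombinedOutput()
        if err != nil {
            fmt.Println("powershell failed:", err)
            return
        }
        fmt.Println("VM state:", strings.TrimSpace(string(out))) // the log later prints "Running"
    }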
	I1028 11:09:29.313968    7696 out.go:177] * Verifying Kubernetes components...
	I1028 11:09:29.331611    7696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:09:29.626409    7696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:09:29.655090    7696 node_ready.go:35] waiting up to 6m0s for node "functional-150200" to be "Ready" ...
	I1028 11:09:29.655339    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:29.655339    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.655420    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.655420    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.659938    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:29.660049    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.660131    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.660131    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.660131    7696 round_trippers.go:580]     Audit-Id: fbbac3bd-0f4e-4fa5-abbe-b18978f82f04
	I1028 11:09:29.660131    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.660131    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.660247    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.660855    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:29.661649    7696 node_ready.go:49] node "functional-150200" has status "Ready":"True"
	I1028 11:09:29.661763    7696 node_ready.go:38] duration metric: took 6.5557ms for node "functional-150200" to be "Ready" ...
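	(Editorial aside, not part of the captured log: the repeated GET /api/v1/nodes/functional-150200 requests above are readiness checks on the node's Ready condition. A minimal client-go sketch of the same check, assuming the kubeconfig path shown elsewhere in the log; this is not minikube's node_ready.go implementation.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-150200", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %q Ready=%s\n", node.Name, c.Status) // the log reports "Ready":"True"
            }
        }
    }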
	I1028 11:09:29.661763    7696 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:09:29.661809    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:29.661809    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.661809    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.661809    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.667036    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:29.667036    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.667036    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.667036    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.667036    7696 round_trippers.go:580]     Audit-Id: 17d539c0-2ea1-4dc9-aa38-845db5cb8dcc
	I1028 11:09:29.667036    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.667036    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.667036    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.667770    7696 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"568","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51250 chars]
	I1028 11:09:29.669773    7696 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.670778    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbbsr
	I1028 11:09:29.670778    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.670778    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.670778    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.673772    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:29.673772    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.673772    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.673772    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.673772    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.673772    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.673772    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.673772    7696 round_trippers.go:580]     Audit-Id: 087a9789-70a7-472b-8a48-9c9f59123199
	I1028 11:09:29.674767    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"568","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6704 chars]
	I1028 11:09:29.674767    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:29.674767    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.674767    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.674767    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.678773    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:29.678773    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.678773    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.678773    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.678773    7696 round_trippers.go:580]     Audit-Id: b7c14703-7517-4d35-a252-0d0be16e10b7
	I1028 11:09:29.678773    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.678773    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.678773    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.678773    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:29.679777    7696 pod_ready.go:93] pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:29.679777    7696 pod_ready.go:82] duration metric: took 10.0038ms for pod "coredns-7c65d6cfc9-bbbsr" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.679777    7696 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:29.818782    7696 request.go:632] Waited for 139.0037ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:29.818782    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/etcd-functional-150200
	I1028 11:09:29.818782    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:29.818782    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:29.818782    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:29.824763    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:29.824960    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:29.824960    7696 round_trippers.go:580]     Audit-Id: 4315f817-76b5-4ed8-979d-33ed7f3a4f38
	I1028 11:09:29.824960    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:29.824960    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:29.824960    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:29.824960    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:29.824960    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:29 GMT
	I1028 11:09:29.825773    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-150200","namespace":"kube-system","uid":"deeea244-f2b0-4060-b0c7-c882b8edf88d","resourceVersion":"581","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.250.220:2379","kubernetes.io/config.hash":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.mirror":"c17aafe1f08872dfacd6ecb272a109a8","kubernetes.io/config.seen":"2024-10-28T11:07:08.851683769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6686 chars]
	I1028 11:09:30.018885    7696 request.go:632] Waited for 192.1079ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:30.018885    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:30.018885    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:30.018885    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:30.018885    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:30.022874    7696 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:09:30.023640    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:30.023640    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:30.023640    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:30.023640    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:30.023640    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:30 GMT
	I1028 11:09:30.023640    7696 round_trippers.go:580]     Audit-Id: ac66d471-a38c-43db-b923-5d6fe53a801a
	I1028 11:09:30.023640    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:30.024273    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:30.024732    7696 pod_ready.go:93] pod "etcd-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:30.024732    7696 pod_ready.go:82] duration metric: took 344.9512ms for pod "etcd-functional-150200" in "kube-system" namespace to be "Ready" ...
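	(Editorial aside, not part of the captured log: each "waiting up to 6m0s for pod ... to be Ready" block above is a poll of the pod's Ready condition until it turns True or the timeout expires. A hedged sketch of that pattern with client-go's wait helpers; the 2s poll interval is an assumption, the 6m timeout and the pod name come from the log.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a kube-system pod until its Ready condition is True.
    func waitPodReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "etcd-functional-150200"))
    }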
	I1028 11:09:30.024732    7696 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:30.217969    7696 request.go:632] Waited for 193.2355ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-150200
	I1028 11:09:30.217969    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-150200
	I1028 11:09:30.217969    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:30.217969    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:30.217969    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:30.222820    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:30.222820    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:30.222820    7696 round_trippers.go:580]     Audit-Id: 97d79a69-d262-457f-aef1-f0c3a10b359e
	I1028 11:09:30.222820    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:30.222820    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:30.222820    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:30.222820    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:30.222820    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:30 GMT
	I1028 11:09:30.223340    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-150200","namespace":"kube-system","uid":"91d76a57-02b8-416d-a13d-8d1b3d78c0ca","resourceVersion":"574","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.250.220:8441","kubernetes.io/config.hash":"447f0c441c6f19d7224e1aaa3d43970f","kubernetes.io/config.mirror":"447f0c441c6f19d7224e1aaa3d43970f","kubernetes.io/config.seen":"2024-10-28T11:07:08.851675069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7912 chars]
	I1028 11:09:30.418261    7696 request.go:632] Waited for 194.055ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:30.418261    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:30.418261    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:30.418261    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:30.418261    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:30.422535    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:30.422571    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:30.422571    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:30.422571    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:30.422571    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:30.422571    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:30.422571    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:30 GMT
	I1028 11:09:30.422571    7696 round_trippers.go:580]     Audit-Id: 8cbd2d0d-0d72-4d9c-8328-4afe47c3e8d3
	I1028 11:09:30.422868    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:30.423416    7696 pod_ready.go:93] pod "kube-apiserver-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:30.423416    7696 pod_ready.go:82] duration metric: took 398.6802ms for pod "kube-apiserver-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:30.423416    7696 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:30.618023    7696 request.go:632] Waited for 194.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:30.618023    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-150200
	I1028 11:09:30.618023    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:30.618023    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:30.618023    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:30.622455    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:30.622567    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:30.622567    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:30.622567    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:30 GMT
	I1028 11:09:30.622567    7696 round_trippers.go:580]     Audit-Id: f15f1573-8a3d-4b6f-8ac4-d92e93665008
	I1028 11:09:30.622567    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:30.622567    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:30.622567    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:30.622872    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-150200","namespace":"kube-system","uid":"74b7db98-fcff-4451-b704-f889d93fec74","resourceVersion":"583","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.mirror":"5375540f1aa977b0c3d6fb04222e8c20","kubernetes.io/config.seen":"2024-10-28T11:07:08.851681069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I1028 11:09:30.818067    7696 request.go:632] Waited for 194.5453ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:30.818067    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:30.818067    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:30.818067    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:30.818067    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:30.822071    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:30.822071    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:30.822071    7696 round_trippers.go:580]     Audit-Id: 5cf56c2c-7a93-4593-8e78-4c9d73880baf
	I1028 11:09:30.822071    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:30.822071    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:30.822071    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:30.822071    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:30.822071    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:30 GMT
	I1028 11:09:30.822071    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:30.823064    7696 pod_ready.go:93] pod "kube-controller-manager-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:30.823064    7696 pod_ready.go:82] duration metric: took 399.644ms for pod "kube-controller-manager-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:30.823064    7696 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-99k8l" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:31.018202    7696 request.go:632] Waited for 195.1359ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-proxy-99k8l
	I1028 11:09:31.018202    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-proxy-99k8l
	I1028 11:09:31.018202    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:31.018202    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:31.018202    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:31.026164    7696 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:09:31.027072    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:31.027072    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:31.027072    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:31.027072    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:31.027226    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:31.027226    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:31 GMT
	I1028 11:09:31.027226    7696 round_trippers.go:580]     Audit-Id: bcdd4d31-340a-47b1-8427-c0413e8c113f
	I1028 11:09:31.027549    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-99k8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"77fef842-5652-4270-ac9e-53d0bc432778","resourceVersion":"559","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"controller-revision-hash":"77987969cc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8bbe38f6-0a88-4efc-adc0-717fdada0d7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8bbe38f6-0a88-4efc-adc0-717fdada0d7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6406 chars]
	I1028 11:09:31.218370    7696 request.go:632] Waited for 190.1009ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:31.218370    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:31.218370    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:31.218370    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:31.218370    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:31.222754    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:31.222754    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:31.222754    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:31 GMT
	I1028 11:09:31.222754    7696 round_trippers.go:580]     Audit-Id: 7ac0d7f5-4e0a-4ec7-b206-ebc450a8ed75
	I1028 11:09:31.222754    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:31.222754    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:31.222754    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:31.222754    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:31.223073    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:31.224101    7696 pod_ready.go:93] pod "kube-proxy-99k8l" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:31.224201    7696 pod_ready.go:82] duration metric: took 401.1322ms for pod "kube-proxy-99k8l" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:31.224201    7696 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:31.418911    7696 request.go:632] Waited for 194.6079ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-150200
	I1028 11:09:31.418911    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-150200
	I1028 11:09:31.418911    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:31.418911    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:31.418911    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:31.423528    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:31.423593    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:31.423692    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:31.423692    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:31.423692    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:31.423692    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:31 GMT
	I1028 11:09:31.423692    7696 round_trippers.go:580]     Audit-Id: a5b1cd49-d9fa-43fe-84a9-b419e5727e92
	I1028 11:09:31.423763    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:31.425272    7696 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-150200","namespace":"kube-system","uid":"e86b3da4-c60d-4a99-8fa9-47e9c5a18934","resourceVersion":"579","creationTimestamp":"2024-10-28T11:07:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"89c0cf5200ccb41e6f151971da196681","kubernetes.io/config.mirror":"89c0cf5200ccb41e6f151971da196681","kubernetes.io/config.seen":"2024-10-28T11:07:08.851682369Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I1028 11:09:31.618817    7696 request.go:632] Waited for 190.2662ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:31.618817    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes/functional-150200
	I1028 11:09:31.618817    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:31.618817    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:31.618817    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:31.626626    7696 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:09:31.626626    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:31.626626    7696 round_trippers.go:580]     Audit-Id: 46e071f8-6e6b-4c45-98dc-365027c2e6a5
	I1028 11:09:31.626626    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:31.626626    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:31.626626    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:31.626626    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:31.626626    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:31 GMT
	I1028 11:09:31.626626    7696 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:07:05Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I1028 11:09:31.627384    7696 pod_ready.go:93] pod "kube-scheduler-functional-150200" in "kube-system" namespace has status "Ready":"True"
	I1028 11:09:31.627384    7696 pod_ready.go:82] duration metric: took 403.1786ms for pod "kube-scheduler-functional-150200" in "kube-system" namespace to be "Ready" ...
	I1028 11:09:31.627384    7696 pod_ready.go:39] duration metric: took 1.9656001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:09:31.627384    7696 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:09:31.640458    7696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:09:31.658777    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:09:31.658843    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:31.660001    7696 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:09:31.660757    7696 kapi.go:59] client config for functional-150200: &rest.Config{Host:"https://172.27.250.220:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
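	(Editorial aside, not part of the captured log: the rest.Config dumped above has QPS:0 and Burst:0, which client-go replaces with its built-in defaults, 5 requests/s with a burst of 10 at the time of writing, and that rate limiter is what produces the repeated "Waited for ... due to client-side throttling" lines in this log. A hedged sketch of raising those limits when building a client; the values 50/100 are arbitrary illustrations, not minikube settings.)

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        // Leaving QPS/Burst at 0 (as in the dumped config) means client-go's default
        // client-side rate limiter applies; raising them reduces the throttling waits.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("built %T against %s with QPS=%v Burst=%d\n", cs, cfg.Host, cfg.QPS, cfg.Burst)
    }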
	I1028 11:09:31.661735    7696 addons.go:234] Setting addon default-storageclass=true in "functional-150200"
	W1028 11:09:31.661735    7696 addons.go:243] addon default-storageclass should already be in state true
	I1028 11:09:31.661735    7696 host.go:66] Checking if "functional-150200" exists ...
	I1028 11:09:31.663118    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:09:31.674027    7696 command_runner.go:130] > 5703
	I1028 11:09:31.674027    7696 api_server.go:72] duration metric: took 2.3629744s to wait for apiserver process to appear ...
	I1028 11:09:31.674841    7696 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:09:31.674841    7696 api_server.go:253] Checking apiserver healthz at https://172.27.250.220:8441/healthz ...
	I1028 11:09:31.684596    7696 api_server.go:279] https://172.27.250.220:8441/healthz returned 200:
	ok
	I1028 11:09:31.684596    7696 round_trippers.go:463] GET https://172.27.250.220:8441/version
	I1028 11:09:31.684596    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:31.684596    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:31.684596    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:31.685483    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:09:31.685811    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:31.686943    7696 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:09:31.687122    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:31.687122    7696 round_trippers.go:580]     Content-Length: 263
	I1028 11:09:31.687122    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:31 GMT
	I1028 11:09:31.687226    7696 round_trippers.go:580]     Audit-Id: cb9e5b20-e714-4d8d-a5dd-828cb6c91f27
	I1028 11:09:31.687226    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:31.687226    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:31.687358    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:31.687358    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:31.687358    7696 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.2",
	  "gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-10-22T20:28:14Z",
	  "goVersion": "go1.22.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1028 11:09:31.687358    7696 api_server.go:141] control plane version: v1.31.2
	I1028 11:09:31.687500    7696 api_server.go:131] duration metric: took 12.6587ms to wait for apiserver health ...
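	(Editorial aside, not part of the captured log: the healthz probe and the GET /version above, which returned "ok" and gitVersion v1.31.2, can be reproduced with client-go's discovery client. A minimal sketch, assuming the same kubeconfig; not minikube's api_server.go code.)

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Raw GET against /healthz; the log expects HTTP 200 with the literal body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println("healthz:", string(body))

        // GET /version, which the log shows as v1.31.2 on linux/amd64.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }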
	I1028 11:09:31.687500    7696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:09:31.691037    7696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:09:31.693524    7696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:09:31.693524    7696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:09:31.693524    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:09:31.818858    7696 request.go:632] Waited for 131.232ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:31.818858    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:31.818858    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:31.818858    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:31.818858    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:31.824364    7696 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:09:31.824364    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:31.824364    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:31.824364    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:31.824364    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:31.824364    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:31.824364    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:31 GMT
	I1028 11:09:31.824364    7696 round_trippers.go:580]     Audit-Id: 41176b6b-bd34-43c6-929a-c2ef62c5be5e
	I1028 11:09:31.825970    7696 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"568","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51250 chars]
	I1028 11:09:31.830400    7696 system_pods.go:59] 7 kube-system pods found
	I1028 11:09:31.830483    7696 system_pods.go:61] "coredns-7c65d6cfc9-bbbsr" [2c1be340-9d91-4d11-b776-a17e2a7409d0] Running
	I1028 11:09:31.830483    7696 system_pods.go:61] "etcd-functional-150200" [deeea244-f2b0-4060-b0c7-c882b8edf88d] Running
	I1028 11:09:31.830483    7696 system_pods.go:61] "kube-apiserver-functional-150200" [91d76a57-02b8-416d-a13d-8d1b3d78c0ca] Running
	I1028 11:09:31.830483    7696 system_pods.go:61] "kube-controller-manager-functional-150200" [74b7db98-fcff-4451-b704-f889d93fec74] Running
	I1028 11:09:31.830593    7696 system_pods.go:61] "kube-proxy-99k8l" [77fef842-5652-4270-ac9e-53d0bc432778] Running
	I1028 11:09:31.830593    7696 system_pods.go:61] "kube-scheduler-functional-150200" [e86b3da4-c60d-4a99-8fa9-47e9c5a18934] Running
	I1028 11:09:31.830593    7696 system_pods.go:61] "storage-provisioner" [11f21928-6ded-4c06-ba52-2f346f9fb8b4] Running
	I1028 11:09:31.830593    7696 system_pods.go:74] duration metric: took 143.0919ms to wait for pod list to return data ...
	I1028 11:09:31.830685    7696 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:09:32.018455    7696 request.go:632] Waited for 187.6195ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/default/serviceaccounts
	I1028 11:09:32.018455    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/default/serviceaccounts
	I1028 11:09:32.018455    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:32.018455    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:32.018455    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:32.024799    7696 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:09:32.024799    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:32.024936    7696 round_trippers.go:580]     Content-Length: 261
	I1028 11:09:32.024936    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:32 GMT
	I1028 11:09:32.024936    7696 round_trippers.go:580]     Audit-Id: b0c306e3-f35d-4050-814b-1c502b0e34fb
	I1028 11:09:32.024936    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:32.024936    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:32.024936    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:32.024936    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:32.025052    7696 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"99ce6802-6999-4b71-ace3-81ad1aa55bd0","resourceVersion":"320","creationTimestamp":"2024-10-28T11:07:13Z"}}]}
	I1028 11:09:32.025554    7696 default_sa.go:45] found service account: "default"
	I1028 11:09:32.025554    7696 default_sa.go:55] duration metric: took 194.8669ms for default service account to be created ...
	I1028 11:09:32.025554    7696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:09:32.218651    7696 request.go:632] Waited for 192.998ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:32.218651    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/namespaces/kube-system/pods
	I1028 11:09:32.218651    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:32.218651    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:32.218651    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:32.223579    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:32.223688    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:32.223688    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:32 GMT
	I1028 11:09:32.223688    7696 round_trippers.go:580]     Audit-Id: a6f7d49c-2078-43a8-b8fa-f608df89ba47
	I1028 11:09:32.223688    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:32.223688    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:32.223688    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:32.223688    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:32.225268    7696 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbbsr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c1be340-9d91-4d11-b776-a17e2a7409d0","resourceVersion":"568","creationTimestamp":"2024-10-28T11:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"97488db9-ee6b-451d-bea8-cb0994714fd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97488db9-ee6b-451d-bea8-cb0994714fd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51250 chars]
	I1028 11:09:32.230138    7696 system_pods.go:86] 7 kube-system pods found
	I1028 11:09:32.230138    7696 system_pods.go:89] "coredns-7c65d6cfc9-bbbsr" [2c1be340-9d91-4d11-b776-a17e2a7409d0] Running
	I1028 11:09:32.230138    7696 system_pods.go:89] "etcd-functional-150200" [deeea244-f2b0-4060-b0c7-c882b8edf88d] Running
	I1028 11:09:32.230138    7696 system_pods.go:89] "kube-apiserver-functional-150200" [91d76a57-02b8-416d-a13d-8d1b3d78c0ca] Running
	I1028 11:09:32.230684    7696 system_pods.go:89] "kube-controller-manager-functional-150200" [74b7db98-fcff-4451-b704-f889d93fec74] Running
	I1028 11:09:32.230684    7696 system_pods.go:89] "kube-proxy-99k8l" [77fef842-5652-4270-ac9e-53d0bc432778] Running
	I1028 11:09:32.230684    7696 system_pods.go:89] "kube-scheduler-functional-150200" [e86b3da4-c60d-4a99-8fa9-47e9c5a18934] Running
	I1028 11:09:32.230684    7696 system_pods.go:89] "storage-provisioner" [11f21928-6ded-4c06-ba52-2f346f9fb8b4] Running
	I1028 11:09:32.230785    7696 system_pods.go:126] duration metric: took 205.0306ms to wait for k8s-apps to be running ...
	I1028 11:09:32.230830    7696 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:09:32.246179    7696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:09:32.277640    7696 system_svc.go:56] duration metric: took 46.8554ms WaitForService to wait for kubelet
	I1028 11:09:32.277640    7696 kubeadm.go:582] duration metric: took 2.9665811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:09:32.277749    7696 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:09:32.418196    7696 request.go:632] Waited for 140.305ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.250.220:8441/api/v1/nodes
	I1028 11:09:32.418196    7696 round_trippers.go:463] GET https://172.27.250.220:8441/api/v1/nodes
	I1028 11:09:32.418196    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:32.418196    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:32.418196    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:32.422513    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:32.423407    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:32.423407    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:32.423407    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:32.423407    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:32.423407    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:32 GMT
	I1028 11:09:32.423407    7696 round_trippers.go:580]     Audit-Id: 13a60ef4-a20c-4321-856c-432948e5fa59
	I1028 11:09:32.423407    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:32.423687    7696 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"functional-150200","uid":"d4384c7d-c9b3-4e0b-a16a-824516e8c932","resourceVersion":"500","creationTimestamp":"2024-10-28T11:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-150200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"functional-150200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_07_09_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I1028 11:09:32.424164    7696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:09:32.424164    7696 node_conditions.go:123] node cpu capacity is 2
	I1028 11:09:32.424280    7696 node_conditions.go:105] duration metric: took 146.5298ms to run NodePressure ...
	I1028 11:09:32.424280    7696 start.go:241] waiting for startup goroutines ...
	I1028 11:09:34.034448    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:09:34.035290    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:34.035463    7696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:09:34.035494    7696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:09:34.035609    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
	I1028 11:09:34.075276    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:09:34.075906    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:34.075906    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:09:36.389403    7696 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:09:36.390387    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:36.390467    7696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
	I1028 11:09:36.838757    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:09:36.839716    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:36.840286    7696 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
	I1028 11:09:36.984120    7696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:09:37.882303    7696 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1028 11:09:37.882379    7696 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1028 11:09:37.882379    7696 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1028 11:09:37.882448    7696 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1028 11:09:37.882448    7696 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1028 11:09:37.882499    7696 command_runner.go:130] > pod/storage-provisioner configured
	I1028 11:09:39.080608    7696 main.go:141] libmachine: [stdout =====>] : 172.27.250.220
	
	I1028 11:09:39.081040    7696 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:09:39.081500    7696 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
	I1028 11:09:39.234057    7696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:09:39.442061    7696 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1028 11:09:39.443431    7696 round_trippers.go:463] GET https://172.27.250.220:8441/apis/storage.k8s.io/v1/storageclasses
	I1028 11:09:39.443491    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:39.443491    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:39.443491    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:39.447775    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:39.447929    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:39.447929    7696 round_trippers.go:580]     Content-Length: 1273
	I1028 11:09:39.447929    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:39 GMT
	I1028 11:09:39.447929    7696 round_trippers.go:580]     Audit-Id: 68075a32-51b0-4753-8bea-f8dfd520ee27
	I1028 11:09:39.447929    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:39.447977    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:39.447977    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:39.447977    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:39.447977    7696 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"591"},"items":[{"metadata":{"name":"standard","uid":"b6d20a3e-4090-42d2-a45d-a0caa40de698","resourceVersion":"420","creationTimestamp":"2024-10-28T11:07:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T11:07:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1028 11:09:39.448544    7696 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6d20a3e-4090-42d2-a45d-a0caa40de698","resourceVersion":"420","creationTimestamp":"2024-10-28T11:07:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T11:07:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 11:09:39.448544    7696 round_trippers.go:463] PUT https://172.27.250.220:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:09:39.448544    7696 round_trippers.go:469] Request Headers:
	I1028 11:09:39.448544    7696 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:09:39.448544    7696 round_trippers.go:473]     Content-Type: application/json
	I1028 11:09:39.448544    7696 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:09:39.453132    7696 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:09:39.453770    7696 round_trippers.go:577] Response Headers:
	I1028 11:09:39.453770    7696 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b2be165a-f224-4e00-b0a6-5d7ae222156a
	I1028 11:09:39.453770    7696 round_trippers.go:580]     Content-Length: 1220
	I1028 11:09:39.453770    7696 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:09:39 GMT
	I1028 11:09:39.453858    7696 round_trippers.go:580]     Audit-Id: 56d8256c-d22c-4242-b90f-501d078456b6
	I1028 11:09:39.453858    7696 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:09:39.453858    7696 round_trippers.go:580]     Content-Type: application/json
	I1028 11:09:39.453858    7696 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d3d4e47-cd04-4659-a32b-4e1146049b05
	I1028 11:09:39.453931    7696 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6d20a3e-4090-42d2-a45d-a0caa40de698","resourceVersion":"420","creationTimestamp":"2024-10-28T11:07:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T11:07:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 11:09:39.457236    7696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:09:39.461303    7696 addons.go:510] duration metric: took 10.150239s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:09:39.461364    7696 start.go:246] waiting for cluster config update ...
	I1028 11:09:39.461364    7696 start.go:255] writing updated cluster config ...
	I1028 11:09:39.471822    7696 ssh_runner.go:195] Run: rm -f paused
	I1028 11:09:39.622307    7696 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:09:39.629095    7696 out.go:177] * Done! kubectl is now configured to use "functional-150200" cluster and "default" namespace by default
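	For readers reproducing this run by hand: the "[executing ==>]" lines above show the Hyper-V driver polling the VM state and first IP address through PowerShell before opening its SSH session. The following is only a minimal Go sketch of those two queries, not minikube's libmachine code; the helper name, the explicit -Command flag, and the error handling are assumptions added for illustration.

	// hypervstate.go: re-run the two PowerShell queries shown in the
	// "[executing ==>]" log lines above (illustrative sketch only).
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// runPS runs one PowerShell expression non-interactively and returns its trimmed stdout.
	// Note: minikube's own invocation (see the log) omits -Command; it is added here for clarity.
	func runPS(expr string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", "-Command", expr).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const vm = "functional-150200" // VM name taken from the log above

		state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			log.Fatalf("query VM state: %v", err)
		}
		ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err != nil {
			log.Fatalf("query VM IP: %v", err)
		}
		fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=172.27.250.220
	}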
	
	
	==> Docker <==
	Oct 28 11:09:09 functional-150200 dockerd[3903]: time="2024-10-28T11:09:09.890206991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:09 functional-150200 dockerd[3903]: time="2024-10-28T11:09:09.890961497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:13 functional-150200 cri-dockerd[4182]: time="2024-10-28T11:09:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.564743461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.564862062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.564912363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.565738869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.714392755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.714596756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.714616957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.714724157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.732075384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.732197485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.732231385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:14 functional-150200 dockerd[3903]: time="2024-10-28T11:09:14.732345586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:14 functional-150200 cri-dockerd[4182]: time="2024-10-28T11:09:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/99df31a9ea0536db94a4664f1e3a349aa774f1d097c9dd947bed391efac6eb49/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:09:14 functional-150200 cri-dockerd[4182]: time="2024-10-28T11:09:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/60b626c2334d5f7640b9cd9060552c996fc5d836c839f51cd07986d28da74ee0/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.066468527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.066547328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.066561228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.066714929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.179327652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.179634154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.179731255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:09:15 functional-150200 dockerd[3903]: time="2024-10-28T11:09:15.180143858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3956ede43e688       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   60b626c2334d5       storage-provisioner
	df95017886b62       505d571f5fd56       2 minutes ago       Running             kube-proxy                1                   99df31a9ea053       kube-proxy-99k8l
	1eee01fc963d8       c69fa2e9cbf5f       2 minutes ago       Running             coredns                   2                   60287780ed2af       coredns-7c65d6cfc9-bbbsr
	347cec8c0af5f       9499c9960544e       2 minutes ago       Running             kube-apiserver            2                   ef4bbada9847a       kube-apiserver-functional-150200
	182fcd7ea48a9       2e96e5913fc06       2 minutes ago       Running             etcd                      2                   0088ce8378257       etcd-functional-150200
	1396d2d504aa6       0486b6c53a1b5       2 minutes ago       Running             kube-controller-manager   2                   35d19a539b25d       kube-controller-manager-functional-150200
	b931bfe7e87ad       847c7bc1a5418       2 minutes ago       Running             kube-scheduler            2                   f38d7d540759c       kube-scheduler-functional-150200
	3885df2851425       c69fa2e9cbf5f       2 minutes ago       Exited              coredns                   1                   155a86f12ee2d       coredns-7c65d6cfc9-bbbsr
	353c83554b86e       0486b6c53a1b5       2 minutes ago       Exited              kube-controller-manager   1                   001506fa6797a       kube-controller-manager-functional-150200
	9338df5b8f68e       9499c9960544e       2 minutes ago       Exited              kube-apiserver            1                   48bb0201c2a9b       kube-apiserver-functional-150200
	64f1dcb75042f       847c7bc1a5418       2 minutes ago       Exited              kube-scheduler            1                   6b0203449194a       kube-scheduler-functional-150200
	a6838367fab39       2e96e5913fc06       2 minutes ago       Exited              etcd                      1                   ae70347c07057       etcd-functional-150200
	3acaea8ee08e1       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   fe3f13a4911fe       storage-provisioner
	c10fc2a26debc       505d571f5fd56       4 minutes ago       Exited              kube-proxy                0                   8327683de18c1       kube-proxy-99k8l
	
	
	==> coredns [1eee01fc963d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 2b51ff0d1447e64155acb71b08577e006c354174e7f71e1657a628ec295ed4c4ed21fd96e87ab4f293107101195027a9efce2f5a2196d032a75e26ff9450df4e
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33797 - 32528 "HINFO IN 1334808197731967053.3731994158001111697. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072105326s
	
	
	==> coredns [3885df285142] <==
	
	
	==> describe nodes <==
	Name:               functional-150200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-150200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=functional-150200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_07_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-150200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:11:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:10:46 +0000   Mon, 28 Oct 2024 11:07:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:10:46 +0000   Mon, 28 Oct 2024 11:07:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:10:46 +0000   Mon, 28 Oct 2024 11:07:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:10:46 +0000   Mon, 28 Oct 2024 11:07:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.250.220
	  Hostname:    functional-150200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 bda8a1157a7149f3b4491f5ec36ec44a
	  System UUID:                8281bbc4-bda0-7d47-b4f8-6f89137b9638
	  Boot ID:                    6ca1dc15-5760-4593-88a9-940fe65ff3e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bbbsr                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m16s
	  kube-system                 etcd-functional-150200                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-functional-150200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-functional-150200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-99k8l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-functional-150200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m29s (x7 over 4m30s)  kubelet          Node functional-150200 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m29s (x8 over 4m30s)  kubelet          Node functional-150200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s (x8 over 4m30s)  kubelet          Node functional-150200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node functional-150200 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node functional-150200 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                4m21s                  kubelet          Node functional-150200 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node functional-150200 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           4m17s                  node-controller  Node functional-150200 event: Registered Node functional-150200 in Controller
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node functional-150200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node functional-150200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node functional-150200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m14s                  node-controller  Node functional-150200 event: Registered Node functional-150200 in Controller
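	The Conditions and Capacity tables above are what the earlier node_conditions.go step verifies (NodePressure false, cpu and ephemeral-storage capacity reported). As a rough client-go sketch of that same check, assuming a default kubeconfig path rather than the Jenkins-specific one used in this run, and not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"log"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumed kubeconfig location for illustration.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				// MemoryPressure/DiskPressure/PIDPressure should be False; Ready should be True.
				fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}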
	
	
	==> dmesg <==
	[  +5.030013] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.804500] systemd-fstab-generator[1670]: Ignoring "noauto" option for root device
	[Oct28 11:07] systemd-fstab-generator[1822]: Ignoring "noauto" option for root device
	[  +0.114499] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.029990] systemd-fstab-generator[2217]: Ignoring "noauto" option for root device
	[  +0.156939] kauditd_printk_skb: 62 callbacks suppressed
	[  +4.940346] systemd-fstab-generator[2329]: Ignoring "noauto" option for root device
	[  +0.187247] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.411933] kauditd_printk_skb: 69 callbacks suppressed
	[Oct28 11:08] systemd-fstab-generator[3434]: Ignoring "noauto" option for root device
	[  +0.684133] systemd-fstab-generator[3470]: Ignoring "noauto" option for root device
	[  +0.282989] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.304561] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +5.361311] kauditd_printk_skb: 89 callbacks suppressed
	[Oct28 11:09] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.216935] systemd-fstab-generator[4144]: Ignoring "noauto" option for root device
	[  +0.223758] systemd-fstab-generator[4155]: Ignoring "noauto" option for root device
	[  +0.312510] systemd-fstab-generator[4170]: Ignoring "noauto" option for root device
	[  +0.982943] systemd-fstab-generator[4346]: Ignoring "noauto" option for root device
	[  +0.316920] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.732949] systemd-fstab-generator[5537]: Ignoring "noauto" option for root device
	[  +0.150775] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.834802] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.781737] systemd-fstab-generator[6155]: Ignoring "noauto" option for root device
	[  +0.189617] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [182fcd7ea48a] <==
	{"level":"info","ts":"2024-10-28T11:09:10.434745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca switched to configuration voters=(9362113571772986314)"}
	{"level":"info","ts":"2024-10-28T11:09:10.434799Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b66baf6276674f82","local-member-id":"81ece8b0e06473ca","added-peer-id":"81ece8b0e06473ca","added-peer-peer-urls":["https://172.27.250.220:2380"]}
	{"level":"info","ts":"2024-10-28T11:09:10.434881Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b66baf6276674f82","local-member-id":"81ece8b0e06473ca","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:09:10.434909Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:09:10.438205Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T11:09:10.438686Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"81ece8b0e06473ca","initial-advertise-peer-urls":["https://172.27.250.220:2380"],"listen-peer-urls":["https://172.27.250.220:2380"],"advertise-client-urls":["https://172.27.250.220:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.250.220:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T11:09:10.438910Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T11:09:10.439361Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.27.250.220:2380"}
	{"level":"info","ts":"2024-10-28T11:09:10.439737Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.27.250.220:2380"}
	{"level":"info","ts":"2024-10-28T11:09:11.974256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-28T11:09:11.974306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T11:09:11.974391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca received MsgPreVoteResp from 81ece8b0e06473ca at term 2"}
	{"level":"info","ts":"2024-10-28T11:09:11.974417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T11:09:11.974426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca received MsgVoteResp from 81ece8b0e06473ca at term 3"}
	{"level":"info","ts":"2024-10-28T11:09:11.974461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca became leader at term 3"}
	{"level":"info","ts":"2024-10-28T11:09:11.974474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81ece8b0e06473ca elected leader 81ece8b0e06473ca at term 3"}
	{"level":"info","ts":"2024-10-28T11:09:11.989243Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"81ece8b0e06473ca","local-member-attributes":"{Name:functional-150200 ClientURLs:[https://172.27.250.220:2379]}","request-path":"/0/members/81ece8b0e06473ca/attributes","cluster-id":"b66baf6276674f82","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T11:09:11.989245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:09:11.989567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:09:11.990511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T11:09:11.991208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T11:09:11.990553Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T11:09:11.991801Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T11:09:11.992906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T11:09:11.992928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.250.220:2379"}
	
	
	==> etcd [a6838367fab3] <==
	{"level":"warn","ts":"2024-10-28T11:09:04.535938Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-10-28T11:09:04.536244Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.27.250.220:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.27.250.220:2380","--initial-cluster=functional-150200=https://172.27.250.220:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.27.250.220:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.27.250.220:2380","--name=functional-150200","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=1000
0","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-10-28T11:09:04.536358Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-10-28T11:09:04.536389Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-10-28T11:09:04.536430Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.27.250.220:2380"]}
	{"level":"info","ts":"2024-10-28T11:09:04.536556Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T11:09:04.540474Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.27.250.220:2379"]}
	{"level":"info","ts":"2024-10-28T11:09:04.540636Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-150200","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.27.250.220:2380"],"listen-peer-urls":["https://172.27.250.220:2380"],"advertise-client-urls":["https://172.27.250.220:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.250.220:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initi
al-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-10-28T11:09:04.601266Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"51.316711ms"}
	{"level":"info","ts":"2024-10-28T11:09:04.658406Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-28T11:09:04.710310Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b66baf6276674f82","local-member-id":"81ece8b0e06473ca","commit-index":526}
	{"level":"info","ts":"2024-10-28T11:09:04.710794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-28T11:09:04.710831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81ece8b0e06473ca became follower at term 2"}
	{"level":"info","ts":"2024-10-28T11:09:04.710854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 81ece8b0e06473ca [peers: [], term: 2, commit: 526, applied: 0, lastindex: 526, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-28T11:09:04.762934Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-28T11:09:04.869870Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":494}
	{"level":"info","ts":"2024-10-28T11:09:04.904946Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-28T11:09:04.921047Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"81ece8b0e06473ca","timeout":"7s"}
	{"level":"info","ts":"2024-10-28T11:09:04.926261Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"81ece8b0e06473ca"}
	{"level":"info","ts":"2024-10-28T11:09:04.926304Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"81ece8b0e06473ca","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-28T11:09:04.942510Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-28T11:09:04.942920Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	
	
	==> kernel <==
	 11:11:30 up 6 min,  0 users,  load average: 0.38, 0.56, 0.27
	Linux functional-150200 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [347cec8c0af5] <==
	I1028 11:09:13.714723       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 11:09:13.719235       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 11:09:13.719653       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 11:09:13.720146       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 11:09:13.722932       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 11:09:13.723244       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 11:09:13.723438       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 11:09:13.723720       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1028 11:09:13.723737       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 11:09:13.725155       1 aggregator.go:171] initial CRD sync complete...
	I1028 11:09:13.725380       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 11:09:13.725766       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 11:09:13.726060       1 cache.go:39] Caches are synced for autoregister controller
	I1028 11:09:13.736872       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 11:09:13.736914       1 policy_source.go:224] refreshing policies
	E1028 11:09:13.737116       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1028 11:09:13.753302       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 11:09:14.520243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 11:09:15.931426       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:09:15.986828       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:09:16.116378       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:09:16.178714       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 11:09:16.190690       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 11:09:16.983320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:09:17.269279       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [9338df5b8f68] <==
	I1028 11:09:04.791597       1 options.go:228] external host was not specified, using 172.27.250.220
	I1028 11:09:04.795229       1 server.go:142] Version: v1.31.2
	I1028 11:09:04.795270       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:09:05.975932       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W1028 11:09:05.979389       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:09:05.979478       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1028 11:09:05.983992       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 11:09:05.989339       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1028 11:09:05.989420       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1028 11:09:05.989775       1 instance.go:232] Using reconciler: lease
	W1028 11:09:05.991085       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1396d2d504aa] <==
	I1028 11:09:16.977473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-150200"
	I1028 11:09:16.977877       1 shared_informer.go:320] Caches are synced for PVC protection
	I1028 11:09:16.982127       1 shared_informer.go:320] Caches are synced for job
	I1028 11:09:16.988299       1 shared_informer.go:320] Caches are synced for taint
	I1028 11:09:16.988561       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 11:09:16.989289       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-150200"
	I1028 11:09:16.989441       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 11:09:16.992644       1 shared_informer.go:320] Caches are synced for disruption
	I1028 11:09:16.996042       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 11:09:17.000234       1 shared_informer.go:320] Caches are synced for expand
	I1028 11:09:17.002100       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1028 11:09:17.024943       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1028 11:09:17.044530       1 shared_informer.go:320] Caches are synced for HPA
	I1028 11:09:17.077016       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 11:09:17.173829       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 11:09:17.215530       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 11:09:17.406602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="381.217384ms"
	I1028 11:09:17.406737       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.1µs"
	I1028 11:09:17.627942       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 11:09:17.666694       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 11:09:17.666910       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 11:09:18.558225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.650743ms"
	I1028 11:09:18.559407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.701µs"
	I1028 11:10:15.448369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-150200"
	I1028 11:10:46.051412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-150200"
	
	
	==> kube-controller-manager [353c83554b86] <==
	
	
	==> kube-proxy [c10fc2a26deb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:07:15.687876       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:07:15.707710       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.27.250.220"]
	E1028 11:07:15.707791       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:07:15.829884       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:07:15.830071       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:07:15.830108       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:07:15.839098       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:07:15.839689       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:07:15.839879       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:07:15.841801       1 config.go:199] "Starting service config controller"
	I1028 11:07:15.841920       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:07:15.842152       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:07:15.842298       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:07:15.842941       1 config.go:328] "Starting node config controller"
	I1028 11:07:15.847120       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:07:15.942229       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:07:15.942552       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:07:15.947571       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [df95017886b6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:09:15.395032       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:09:15.407166       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.27.250.220"]
	E1028 11:09:15.407239       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:09:15.455940       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:09:15.456120       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:09:15.456150       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:09:15.460567       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:09:15.461571       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:09:15.461590       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:09:15.463930       1 config.go:199] "Starting service config controller"
	I1028 11:09:15.464259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:09:15.464464       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:09:15.464623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:09:15.465265       1 config.go:328] "Starting node config controller"
	I1028 11:09:15.465363       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:09:15.565172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:09:15.565233       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:09:15.565730       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [64f1dcb75042] <==
	
	
	==> kube-scheduler [b931bfe7e87a] <==
	I1028 11:09:11.155216       1 serving.go:386] Generated self-signed cert in-memory
	W1028 11:09:13.590462       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 11:09:13.590612       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 11:09:13.590694       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 11:09:13.590745       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 11:09:13.687833       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 11:09:13.689053       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:09:13.692443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 11:09:13.692559       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:09:13.692938       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 11:09:13.693220       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 11:09:13.793120       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:09:10 functional-150200 kubelet[5544]: E1028 11:09:10.303202    5544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.27.250.220:8441: connect: connection refused" logger="UnhandledError"
	Oct 28 11:09:10 functional-150200 kubelet[5544]: I1028 11:09:10.679401    5544 kubelet_node_status.go:72] "Attempting to register node" node="functional-150200"
	Oct 28 11:09:13 functional-150200 kubelet[5544]: I1028 11:09:13.816279    5544 kubelet_node_status.go:111] "Node was previously registered" node="functional-150200"
	Oct 28 11:09:13 functional-150200 kubelet[5544]: I1028 11:09:13.816880    5544 kubelet_node_status.go:75] "Successfully registered node" node="functional-150200"
	Oct 28 11:09:13 functional-150200 kubelet[5544]: I1028 11:09:13.817145    5544 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 11:09:13 functional-150200 kubelet[5544]: I1028 11:09:13.818886    5544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 11:09:13 functional-150200 kubelet[5544]: I1028 11:09:13.979043    5544 apiserver.go:52] "Watching apiserver"
	Oct 28 11:09:14 functional-150200 kubelet[5544]: I1028 11:09:14.046931    5544 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 11:09:14 functional-150200 kubelet[5544]: I1028 11:09:14.099648    5544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77fef842-5652-4270-ac9e-53d0bc432778-xtables-lock\") pod \"kube-proxy-99k8l\" (UID: \"77fef842-5652-4270-ac9e-53d0bc432778\") " pod="kube-system/kube-proxy-99k8l"
	Oct 28 11:09:14 functional-150200 kubelet[5544]: I1028 11:09:14.100029    5544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77fef842-5652-4270-ac9e-53d0bc432778-lib-modules\") pod \"kube-proxy-99k8l\" (UID: \"77fef842-5652-4270-ac9e-53d0bc432778\") " pod="kube-system/kube-proxy-99k8l"
	Oct 28 11:09:14 functional-150200 kubelet[5544]: I1028 11:09:14.100174    5544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/11f21928-6ded-4c06-ba52-2f346f9fb8b4-tmp\") pod \"storage-provisioner\" (UID: \"11f21928-6ded-4c06-ba52-2f346f9fb8b4\") " pod="kube-system/storage-provisioner"
	Oct 28 11:09:14 functional-150200 kubelet[5544]: I1028 11:09:14.294742    5544 scope.go:117] "RemoveContainer" containerID="3885df28514258d366b255ff71e59b89cd95a0d8e767ce14d3591d2293445166"
	Oct 28 11:09:14 functional-150200 kubelet[5544]: I1028 11:09:14.946884    5544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60b626c2334d5f7640b9cd9060552c996fc5d836c839f51cd07986d28da74ee0"
	Oct 28 11:09:17 functional-150200 kubelet[5544]: I1028 11:09:17.046466    5544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 28 11:09:18 functional-150200 kubelet[5544]: I1028 11:09:18.510125    5544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 28 11:10:09 functional-150200 kubelet[5544]: E1028 11:10:09.147474    5544 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:10:09 functional-150200 kubelet[5544]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:10:09 functional-150200 kubelet[5544]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:10:09 functional-150200 kubelet[5544]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:10:09 functional-150200 kubelet[5544]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:11:09 functional-150200 kubelet[5544]: E1028 11:11:09.152479    5544 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:11:09 functional-150200 kubelet[5544]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:11:09 functional-150200 kubelet[5544]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:11:09 functional-150200 kubelet[5544]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:11:09 functional-150200 kubelet[5544]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [3956ede43e68] <==
	I1028 11:09:15.324378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:09:15.343829       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:09:15.344128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:09:32.757461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:09:32.757888       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-150200_26d52ff2-9c76-4adc-a909-31e2661d93dc!
	I1028 11:09:32.757886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2715bcf7-d8c1-447d-aaaa-64133b42c5a3", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-150200_26d52ff2-9c76-4adc-a909-31e2661d93dc became leader
	I1028 11:09:32.859464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-150200_26d52ff2-9c76-4adc-a909-31e2661d93dc!
	
	
	==> storage-provisioner [3acaea8ee08e] <==
	I1028 11:07:22.537280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:07:22.550291       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:07:22.550528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:07:22.568308       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:07:22.569508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-150200_2be29f75-b0f0-48ef-b961-d84218b41998!
	I1028 11:07:22.569255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2715bcf7-d8c1-447d-aaaa-64133b42c5a3", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-150200_2be29f75-b0f0-48ef-b961-d84218b41998 became leader
	I1028 11:07:22.670222       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-150200_2be29f75-b0f0-48ef-b961-d84218b41998!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-150200 -n functional-150200
E1028 11:11:39.669890    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-150200 -n functional-150200: (12.7915101s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-150200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (36.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 service --namespace=default --https --url hello-node: exit status 1 (15.0157901s)
functional_test.go:1511: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-150200 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url --format={{.IP}}: exit status 1 (15.0122383s)
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url: exit status 1 (15.0118688s)
functional_test.go:1561: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url": exit status 1
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
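The three ServiceCmd failures above (HTTPS, Format, URL) are the same symptom: "minikube service" exits with status 1 after roughly 15 seconds and never returns a URL for hello-node. A minimal manual-triage sketch, assuming the functional-150200 profile from this run is still up; the last command reuses the binary and flags shown above, while the "get svc" and "service list" steps are additional standard kubectl/minikube commands, not part of the test run (minikube normally names the kubeconfig context after the profile):

	kubectl --context functional-150200 get svc hello-node -o wide
	out/minikube-windows-amd64.exe -p functional-150200 service list
	out/minikube-windows-amd64.exe -p functional-150200 service hello-node --url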

TestMultiControlPlane/serial/PingHostFromPods (71.33s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- sh -c "ping -c 1 172.27.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- sh -c "ping -c 1 172.27.240.1": exit status 1 (10.5611693s)

-- stdout --
	PING 172.27.240.1 (172.27.240.1): 56 data bytes
	
	--- 172.27.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.27.240.1) from pod (busybox-7dff88458-b84wl): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-cvthb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-cvthb -- sh -c "ping -c 1 172.27.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-cvthb -- sh -c "ping -c 1 172.27.240.1": exit status 1 (10.5494893s)

-- stdout --
	PING 172.27.240.1 (172.27.240.1): 56 data bytes
	
	--- 172.27.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.27.240.1) from pod (busybox-7dff88458-cvthb): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- sh -c "ping -c 1 172.27.240.1"
E1028 11:36:39.686847    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- sh -c "ping -c 1 172.27.240.1": exit status 1 (10.5372163s)

-- stdout --
	PING 172.27.240.1 (172.27.240.1): 56 data bytes
	
	--- 172.27.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.27.240.1) from pod (busybox-7dff88458-gp9fd): exit status 1
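All three pods show the same pattern: host.minikube.internal resolves, but an ICMP echo to the resolved host address (172.27.240.1) gets no reply (100% packet loss). A minimal manual-triage sketch, assuming the ha-201400 profile is still running; the pod name and host IP are taken from this log and differ per run, and the firewall check reflects an assumption (the Windows host filtering inbound ICMPv4 from the Hyper-V Default Switch network is one common but unverified explanation for this pattern), not something the test itself verifies:

	out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- sh -c "ping -c 1 172.27.240.1"
	powershell -NoProfile -Command "Get-NetFirewallRule -DisplayName '*ICMPv4*' | Select-Object DisplayName, Enabled, Direction"

If the echo-request rules show up disabled there, the drop is likely host-side rather than a cluster networking problem.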
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-201400 -n ha-201400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-201400 -n ha-201400: (13.043692s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 logs -n 25: (9.4301s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| update-context | functional-150200                    | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:18 UTC | 28 Oct 24 11:18 UTC |
	|                | update-context                       |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |                   |         |                     |                     |
	| update-context | functional-150200                    | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:18 UTC | 28 Oct 24 11:18 UTC |
	|                | update-context                       |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |                   |         |                     |                     |
	| image          | functional-150200 image ls           | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:18 UTC | 28 Oct 24 11:18 UTC |
	| delete         | -p functional-150200                 | functional-150200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:22 UTC | 28 Oct 24 11:23 UTC |
	| start          | -p ha-201400 --wait=true             | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:35 UTC |
	|                | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|                | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|                | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- apply -f             | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:35 UTC | 28 Oct 24 11:35 UTC |
	|                | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- rollout status       | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:35 UTC | 28 Oct 24 11:36 UTC |
	|                | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- get pods -o          | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- get pods -o          | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-b84wl --           |                   |                   |         |                     |                     |
	|                | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-cvthb --           |                   |                   |         |                     |                     |
	|                | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-gp9fd --           |                   |                   |         |                     |                     |
	|                | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-b84wl --           |                   |                   |         |                     |                     |
	|                | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-cvthb --           |                   |                   |         |                     |                     |
	|                | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-gp9fd --           |                   |                   |         |                     |                     |
	|                | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-b84wl -- nslookup  |                   |                   |         |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-cvthb -- nslookup  |                   |                   |         |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-gp9fd -- nslookup  |                   |                   |         |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- get pods -o          | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-b84wl              |                   |                   |         |                     |                     |
	|                | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|                | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC |                     |
	|                | busybox-7dff88458-b84wl -- sh        |                   |                   |         |                     |                     |
	|                | -c ping -c 1 172.27.240.1            |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-cvthb              |                   |                   |         |                     |                     |
	|                | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|                | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC |                     |
	|                | busybox-7dff88458-cvthb -- sh        |                   |                   |         |                     |                     |
	|                | -c ping -c 1 172.27.240.1            |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC | 28 Oct 24 11:36 UTC |
	|                | busybox-7dff88458-gp9fd              |                   |                   |         |                     |                     |
	|                | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|                | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl        | -p ha-201400 -- exec                 | ha-201400         | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:36 UTC |                     |
	|                | busybox-7dff88458-gp9fd -- sh        |                   |                   |         |                     |                     |
	|                | -c ping -c 1 172.27.240.1            |                   |                   |         |                     |                     |
	|----------------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:23:24
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:23:23.945177    3404 out.go:345] Setting OutFile to fd 1420 ...
	I1028 11:23:24.025125    3404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:23:24.025125    3404 out.go:358] Setting ErrFile to fd 1632...
	I1028 11:23:24.025125    3404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:23:24.053744    3404 out.go:352] Setting JSON to false
	I1028 11:23:24.056741    3404 start.go:129] hostinfo: {"hostname":"minikube6","uptime":162429,"bootTime":1729952174,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 11:23:24.056741    3404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:23:24.065808    3404 out.go:177] * [ha-201400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 11:23:24.072065    3404 notify.go:220] Checking for updates...
	I1028 11:23:24.074260    3404 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:23:24.079186    3404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:23:24.082394    3404 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 11:23:24.084903    3404 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:23:24.087428    3404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:23:24.091365    3404 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:23:29.788175    3404 out.go:177] * Using the hyperv driver based on user configuration
	I1028 11:23:29.792145    3404 start.go:297] selected driver: hyperv
	I1028 11:23:29.792183    3404 start.go:901] validating driver "hyperv" against <nil>
	I1028 11:23:29.792264    3404 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:23:29.844191    3404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:23:29.846138    3404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:23:29.846138    3404 cni.go:84] Creating CNI manager for ""
	I1028 11:23:29.846138    3404 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:23:29.846138    3404 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:23:29.846138    3404 start.go:340] cluster config:
	{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:23:29.846138    3404 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:23:29.850377    3404 out.go:177] * Starting "ha-201400" primary control-plane node in "ha-201400" cluster
	I1028 11:23:29.853474    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:23:29.853474    3404 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 11:23:29.854170    3404 cache.go:56] Caching tarball of preloaded images
	I1028 11:23:29.854314    3404 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:23:29.854314    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:23:29.854967    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:23:29.855532    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json: {Name:mkec662da8c9b8a5bcca6963febe40e58918464d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:23:29.855760    3404 start.go:360] acquireMachinesLock for ha-201400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:23:29.856765    3404 start.go:364] duration metric: took 1.0052ms to acquireMachinesLock for "ha-201400"
	I1028 11:23:29.856765    3404 start.go:93] Provisioning new machine with config: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:23:29.856765    3404 start.go:125] createHost starting for "" (driver="hyperv")
	I1028 11:23:29.858999    3404 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:23:29.860001    3404 start.go:159] libmachine.API.Create for "ha-201400" (driver="hyperv")
	I1028 11:23:29.860001    3404 client.go:168] LocalClient.Create starting
	I1028 11:23:29.860001    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 11:23:29.860769    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:23:29.860769    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:23:29.860988    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 11:23:29.861141    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:23:29.861141    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:23:29.861141    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 11:23:32.050169    3404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 11:23:32.050169    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:32.050288    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 11:23:33.890385    3404 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 11:23:33.890385    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:33.890667    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:23:35.487087    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:23:35.487087    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:35.487087    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:23:39.390943    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:23:39.390943    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:39.394704    3404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:23:39.909693    3404 main.go:141] libmachine: Creating SSH key...
	I1028 11:23:40.215866    3404 main.go:141] libmachine: Creating VM...
	I1028 11:23:40.215866    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:23:43.289477    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:23:43.290589    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:43.290589    3404 main.go:141] libmachine: Using switch "Default Switch"
	I1028 11:23:43.290589    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:23:45.149187    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:23:45.149187    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:45.149187    3404 main.go:141] libmachine: Creating VHD
	I1028 11:23:45.150149    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 11:23:48.979903    3404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F71B427F-95EF-46C6-BB3D-C741D8705557
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 11:23:48.980994    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:48.981047    3404 main.go:141] libmachine: Writing magic tar header
	I1028 11:23:48.981047    3404 main.go:141] libmachine: Writing SSH key tar header
	I1028 11:23:48.991865    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 11:23:52.232922    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:23:52.232979    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:52.232979    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\disk.vhd' -SizeBytes 20000MB
	I1028 11:23:54.858270    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:23:54.858270    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:54.858545    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-201400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 11:23:58.670169    3404 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-201400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 11:23:58.670553    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:58.670633    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-201400 -DynamicMemoryEnabled $false
	I1028 11:24:01.012735    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:01.012735    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:01.013644    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-201400 -Count 2
	I1028 11:24:03.328467    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:03.329221    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:03.329339    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-201400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\boot2docker.iso'
	I1028 11:24:06.018951    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:06.018951    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:06.018951    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-201400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\disk.vhd'
	I1028 11:24:08.783783    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:08.783783    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:08.783783    3404 main.go:141] libmachine: Starting VM...
	I1028 11:24:08.783783    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-201400
	I1028 11:24:11.994076    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:11.994076    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:11.994076    3404 main.go:141] libmachine: Waiting for host to start...
	I1028 11:24:11.994076    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:14.383093    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:14.383134    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:14.383218    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:17.020640    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:17.020640    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:18.022370    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:20.388698    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:20.388698    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:20.389384    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:23.040590    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:23.040590    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:24.041561    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:26.388718    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:26.388718    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:26.389800    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:29.030374    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:29.030591    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:30.031010    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:32.334668    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:32.335595    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:32.335595    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:35.022501    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:35.022501    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:36.023084    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:38.342495    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:38.342495    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:38.342495    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:41.068455    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:41.068455    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:41.068455    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:43.306699    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:43.307581    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:43.307687    3404 machine.go:93] provisionDockerMachine start ...
	I1028 11:24:43.307687    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:45.586564    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:45.586641    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:45.586641    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:48.300822    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:48.300892    3404 main.go:141] libmachine: [stderr =====>] : 
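
The repeated Get-VM state / ipaddresses[0] queries above are a poll loop: the driver keeps asking Hyper-V for the first IP address on the VM's first network adapter until one appears (172.27.248.250 here, after roughly 30 seconds). A self-contained sketch of such a wait loop is shown below; the one-second retry interval and the timeout are assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs one Hyper-V cmdlet the way the [executing ==>] lines above do.
func ps(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil // e.g. 172.27.248.250 in the run above
			}
		}
		time.Sleep(time.Second) // assumed interval; the log shows a retry every few seconds
	}
	return "", fmt.Errorf("timed out waiting for an IP on %s", vm)
}

func main() {
	ip, err := waitForIP("ha-201400", 3*time.Minute)
	fmt.Println(ip, err)
}
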
	I1028 11:24:48.306670    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:24:48.319711    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:24:48.319711    3404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:24:48.448312    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
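
With an IP in hand, provisioning switches to the "native" SSH client and runs hostname against the guest (user docker, port 22, the machine's id_rsa key, as the log records). The sketch below uses golang.org/x/crypto/ssh and is not minikube's own sshutil wrapper; the key path is a placeholder.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\path\to\machines\ha-201400\id_rsa`) // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.27.248.250:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("hostname => %q (err=%v)\n", out, err)
}
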
	
	I1028 11:24:48.448312    3404 buildroot.go:166] provisioning hostname "ha-201400"
	I1028 11:24:48.448312    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:50.695910    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:50.696477    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:50.696638    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:53.363606    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:53.363707    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:53.370925    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:24:53.371631    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:24:53.371631    3404 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-201400 && echo "ha-201400" | sudo tee /etc/hostname
	I1028 11:24:53.522931    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-201400
	
	I1028 11:24:53.522931    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:55.735125    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:55.735125    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:55.735327    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:58.399484    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:58.399484    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:58.406211    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:24:58.406784    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:24:58.406910    3404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-201400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-201400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-201400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:24:58.542079    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:24:58.542079    3404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:24:58.542079    3404 buildroot.go:174] setting up certificates
	I1028 11:24:58.542079    3404 provision.go:84] configureAuth start
	I1028 11:24:58.542079    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:00.773961    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:00.773961    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:00.773961    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:03.411510    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:03.411510    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:03.411982    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:05.620663    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:05.620663    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:05.620748    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:08.267402    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:08.268418    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:08.268523    3404 provision.go:143] copyHostCerts
	I1028 11:25:08.268701    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:25:08.269074    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:25:08.269074    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:25:08.269555    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:25:08.270975    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:25:08.271524    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:25:08.271524    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:25:08.271880    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:25:08.272805    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:25:08.273095    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:25:08.273171    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:25:08.273427    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:25:08.274724    3404 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-201400 san=[127.0.0.1 172.27.248.250 ha-201400 localhost minikube]
	I1028 11:25:08.408133    3404 provision.go:177] copyRemoteCerts
	I1028 11:25:08.421118    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:25:08.421118    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:10.618156    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:10.618156    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:10.618272    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:13.271182    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:13.271182    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:13.271923    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:25:13.375426    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9541188s)
	I1028 11:25:13.375426    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:25:13.376020    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:25:13.440790    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:25:13.440966    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:25:13.491081    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:25:13.491440    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 11:25:13.550743    3404 provision.go:87] duration metric: took 15.0084107s to configureAuth
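
configureAuth above copies the host CA material and then generates a server certificate whose SANs are [127.0.0.1 172.27.248.250 ha-201400 localhost minikube] before scp-ing it to /etc/docker. Purely as an illustration of that SAN set with crypto/x509 (self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem; this is not the provision.go code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-201400"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.248.250")},
		DNSNames:     []string{"ha-201400", "localhost", "minikube"},
	}
	// Self-signed for the sketch; the real certificate is issued by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
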
	I1028 11:25:13.550743    3404 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:25:13.551430    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:25:13.551430    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:15.759031    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:15.759031    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:15.759127    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:18.394001    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:18.394001    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:18.400532    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:18.401259    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:18.401259    3404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:25:18.521445    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:25:18.521445    3404 buildroot.go:70] root file system type: tmpfs
	I1028 11:25:18.521445    3404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:25:18.522064    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:20.781489    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:20.781489    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:20.781835    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:23.418009    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:23.418009    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:23.424631    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:23.424703    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:23.425287    3404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:25:23.581281    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:25:23.581824    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:25.784576    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:25.784680    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:25.784747    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:28.473715    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:28.474572    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:28.480729    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:28.481287    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:28.481287    3404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:25:30.779863    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 11:25:30.779942    3404 machine.go:96] duration metric: took 47.4717185s to provisionDockerMachine
	I1028 11:25:30.780006    3404 client.go:171] duration metric: took 2m0.9185746s to LocalClient.Create
	I1028 11:25:30.780006    3404 start.go:167] duration metric: took 2m0.9186379s to libmachine.API.Create "ha-201400"
	I1028 11:25:30.780082    3404 start.go:293] postStartSetup for "ha-201400" (driver="hyperv")
	I1028 11:25:30.780082    3404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:25:30.793085    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:25:30.793085    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:33.024287    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:33.024287    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:33.024287    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:35.723875    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:35.723875    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:35.725281    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:25:35.832422    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0392802s)
	I1028 11:25:35.844100    3404 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:25:35.851529    3404 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:25:35.851654    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:25:35.851949    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:25:35.853091    3404 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:25:35.853091    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:25:35.865291    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:25:35.885900    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:25:35.935921    3404 start.go:296] duration metric: took 5.1557809s for postStartSetup
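
The filesync scan above maps each file under the local .minikube\files tree onto the same path inside the guest (here 96082.pem lands in /etc/ssl/certs). A small sketch of that mapping, assuming only that the path under files mirrors the target path in the VM:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\files` // from the log above
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		guestPath := "/" + filepath.ToSlash(rel)
		fmt.Printf("%s -> %s\n", p, guestPath) // e.g. ...\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
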
	I1028 11:25:35.939827    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:38.203571    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:38.204124    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:38.204206    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:40.859263    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:40.859263    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:40.859263    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:25:40.862661    3404 start.go:128] duration metric: took 2m11.0042909s to createHost
	I1028 11:25:40.862819    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:43.112867    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:43.113113    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:43.113113    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:45.850010    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:45.850010    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:45.855986    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:45.856863    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:45.856941    3404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:25:45.982599    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114745.994965636
	
	I1028 11:25:45.982599    3404 fix.go:216] guest clock: 1730114745.994965636
	I1028 11:25:45.982599    3404 fix.go:229] Guest: 2024-10-28 11:25:45.994965636 +0000 UTC Remote: 2024-10-28 11:25:40.8626619 +0000 UTC m=+137.016616101 (delta=5.132303736s)
	I1028 11:25:45.982599    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:48.277991    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:48.277991    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:48.278927    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:50.980005    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:50.980477    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:50.986024    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:50.986664    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:50.986664    3404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730114745
	I1028 11:25:51.136121    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:25:45 UTC 2024
	
	I1028 11:25:51.136121    3404 fix.go:236] clock set: Mon Oct 28 11:25:45 UTC 2024
	 (err=<nil>)
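
The clock check above reads the guest's date +%s.%N, computes the skew against the host (about 5.1 s in this run), and then resets the guest clock with sudo date -s @1730114745. The following is only a sketch of that arithmetic, not the fix.go code:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1730114745.994965636" // stdout of `date +%s.%N` in the run above
	secsStr := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)[0]
	secs, err := strconv.ParseInt(secsStr, 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(secs, 0)
	fmt.Printf("guest clock: %s, skew vs. this host: %s\n", guest.UTC(), time.Since(guest))
	// When the skew is judged too large, the guest is resynced over SSH with:
	fmt.Printf("sudo date -s @%d\n", secs)
}
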
	I1028 11:25:51.136121    3404 start.go:83] releasing machines lock for "ha-201400", held for 2m21.2777595s
	I1028 11:25:51.136121    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:53.372266    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:53.372266    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:53.372266    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:56.020790    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:56.020790    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:56.026784    3404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:25:56.026942    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:56.036676    3404 ssh_runner.go:195] Run: cat /version.json
	I1028 11:25:56.037227    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:58.308577    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:58.308577    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:58.308577    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:58.320156    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:58.320156    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:58.320156    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:26:01.087448    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:01.087448    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:01.087986    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:01.148274    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:01.148424    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:01.148424    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:01.179724    3404 ssh_runner.go:235] Completed: cat /version.json: (5.14299s)
	I1028 11:26:01.192035    3404 ssh_runner.go:195] Run: systemctl --version
	I1028 11:26:01.198312    3404 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1713908s)
	W1028 11:26:01.198312    3404 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
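
This appears to be the probe behind the "! Failing to connect to https://registry.k8s.io/" warning flagged as unexpected stderr in TestErrorSpam/setup: the command is executed inside the Linux guest, where curl.exe is not a command, so it exits with status 127. Purely as an illustration, the same reachability check (2-second timeout against the same URL) expressed with net/http instead of shelling out to curl:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry reachable, status:", resp.Status)
}
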
	I1028 11:26:01.215361    3404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:26:01.226050    3404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:26:01.237961    3404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:26:01.271296    3404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:26:01.271355    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:26:01.271406    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1028 11:26:01.302739    3404 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:26:01.302739    3404 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:26:01.326760    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:26:01.366594    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:26:01.387384    3404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:26:01.398657    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:26:01.433340    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:26:01.469680    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:26:01.504099    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:26:01.541281    3404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:26:01.575337    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:26:01.607447    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:26:01.640962    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:26:01.673727    3404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:26:01.693418    3404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:26:01.705052    3404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:26:01.739561    3404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:26:01.768235    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:01.996681    3404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:26:02.028859    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:26:02.040484    3404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:26:02.079820    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:26:02.117997    3404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:26:02.160672    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:26:02.200954    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:26:02.238318    3404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:26:02.300890    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:26:02.325769    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:26:02.376682    3404 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:26:02.393651    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:26:02.412029    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:26:02.455275    3404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:26:02.696531    3404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:26:02.891957    3404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:26:02.891957    3404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:26:02.934953    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:03.164029    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:26:05.768119    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6040159s)
	I1028 11:26:05.780334    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:26:05.820842    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:26:05.858384    3404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:26:06.074653    3404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:26:06.281331    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:06.479459    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:26:06.523960    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:26:06.561086    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:06.772706    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:26:06.895054    3404 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:26:06.907221    3404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:26:06.916450    3404 start.go:563] Will wait 60s for crictl version
	I1028 11:26:06.928147    3404 ssh_runner.go:195] Run: which crictl
	I1028 11:26:06.946215    3404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:26:07.003964    3404 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
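
start.go above waits up to 60 s for /var/run/cri-dockerd.sock and then for a working crictl before moving on. A sketch of that kind of socket wait is shown below; the 500 ms poll interval is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists, cri-dockerd is up
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cri-dockerd socket is ready")
}
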
	I1028 11:26:07.013272    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:26:07.067503    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:26:07.108854    3404 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:26:07.108854    3404 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:26:07.113922    3404 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:26:07.113973    3404 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:26:07.113973    3404 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:26:07.113973    3404 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:26:07.117380    3404 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:26:07.117380    3404 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:26:07.131924    3404 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:26:07.138514    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:26:07.171895    3404 kubeadm.go:883] updating cluster {Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:26:07.171895    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:26:07.181258    3404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:26:07.205286    3404 docker.go:689] Got preloaded images: 
	I1028 11:26:07.205338    3404 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.2 wasn't preloaded
	I1028 11:26:07.217508    3404 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 11:26:07.248905    3404 ssh_runner.go:195] Run: which lz4
	I1028 11:26:07.257257    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:26:07.272547    3404 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:26:07.279996    3404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:26:07.280167    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (343199686 bytes)
	I1028 11:26:09.168462    3404 docker.go:653] duration metric: took 1.9111828s to copy over tarball
	I1028 11:26:09.181877    3404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:26:17.265329    3404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.0833599s)
	I1028 11:26:17.265329    3404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:26:17.348168    3404 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 11:26:17.367439    3404 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1028 11:26:18.759399    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:18.973666    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:26:21.628547    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6548509s)
	I1028 11:26:21.639809    3404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:26:21.672280    3404 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 11:26:21.672443    3404 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:26:21.672443    3404 kubeadm.go:934] updating node { 172.27.248.250 8443 v1.31.2 docker true true} ...
	I1028 11:26:21.672702    3404 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-201400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.248.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:26:21.682254    3404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 11:26:21.755750    3404 cni.go:84] Creating CNI manager for ""
	I1028 11:26:21.755832    3404 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:26:21.755872    3404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:26:21.755938    3404 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.248.250 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-201400 NodeName:ha-201400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.248.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.248.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:26:21.756136    3404 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.248.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-201400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.27.248.250"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.248.250"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:26:21.756283    3404 kube-vip.go:115] generating kube-vip config ...
	I1028 11:26:21.767851    3404 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:26:21.795259    3404 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:26:21.795387    3404 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.255.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:26:21.806835    3404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:26:21.826970    3404 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:26:21.838537    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:26:21.858302    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1028 11:26:21.891379    3404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:26:21.921780    3404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1028 11:26:21.956895    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:26:22.002521    3404 ssh_runner.go:195] Run: grep 172.27.255.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:26:22.009560    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:26:22.045830    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:22.261348    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:26:22.298295    3404 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400 for IP: 172.27.248.250
	I1028 11:26:22.298388    3404 certs.go:194] generating shared ca certs ...
	I1028 11:26:22.298447    3404 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.298571    3404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:26:22.298571    3404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:26:22.298571    3404 certs.go:256] generating profile certs ...
	I1028 11:26:22.298571    3404 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key
	I1028 11:26:22.298571    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.crt with IP's: []
	I1028 11:26:22.361747    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.crt ...
	I1028 11:26:22.361747    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.crt: {Name:mkc73e42285e6173fedba85ce6073b39b49eaa4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.363617    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key ...
	I1028 11:26:22.363617    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key: {Name:mk352d8d9096b4da61558569d3583a91f9774340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.364243    3404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5
	I1028 11:26:22.365239    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.248.250 172.27.255.254]
	I1028 11:26:22.598792    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5 ...
	I1028 11:26:22.598792    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5: {Name:mkd2b27f659177c16b390d5504556630de468537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.600209    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5 ...
	I1028 11:26:22.600209    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5: {Name:mk9724f5d53c33b68a93a081cb10ad12cf0d1375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.601257    3404 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt
	I1028 11:26:22.614851    3404 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key
	I1028 11:26:22.616844    3404 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key
	I1028 11:26:22.617433    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt with IP's: []
	I1028 11:26:22.912167    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt ...
	I1028 11:26:22.913287    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt: {Name:mk5f04bf38ef925a1e509f5e1f07ddbecad69152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.914873    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key ...
	I1028 11:26:22.914873    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key: {Name:mkdba9b4bd7ac2bc479ed6817470eed2c30be6cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
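
The profile certs above are leaf certificates signed by the shared minikubeCA, and the apiserver cert carries IP SANs for the service IP, localhost, the node IP, and the HA VIP. As a hedged illustration of that pattern with Go's crypto/x509 (a throwaway self-signed CA stands in for the real minikubeCA key pair on disk; key sizes and lifetimes are assumptions):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA (stand-in for minikubeCA, which the log reuses from disk).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// API-server serving cert with the IP SANs listed in the log: the
    	// service IP, localhost, an internal IP, the node IP, and the HA VIP.
    	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("172.27.248.250"), net.ParseIP("172.27.255.254"),
    		},
    	}
    	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }
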
	I1028 11:26:22.915274    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:26:22.916390    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:26:22.916552    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:26:22.916661    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:26:22.916964    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:26:22.917191    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:26:22.917380    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:26:22.927650    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:26:22.928828    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:26:22.929512    3404 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:26:22.929512    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:26:22.929863    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:26:22.930365    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:26:22.930572    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:26:22.930924    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:26:22.930924    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:22.930924    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:26:22.930924    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:26:22.933328    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:26:22.987072    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:26:23.035501    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:26:23.091383    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:26:23.143544    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:26:23.195907    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:26:23.245549    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:26:23.292533    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:26:23.336317    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:26:23.377946    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:26:23.424369    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:26:23.475936    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:26:23.523304    3404 ssh_runner.go:195] Run: openssl version
	I1028 11:26:23.546779    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:26:23.583281    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:23.591819    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:23.601651    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:23.625357    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:26:23.656703    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:26:23.690110    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:26:23.697841    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:26:23.707741    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:26:23.729094    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:26:23.761341    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:26:23.792722    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:26:23.799806    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:26:23.810899    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:26:23.832216    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
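
Each CA bundle installed above is also symlinked under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-style verification locates trust anchors. A small Go sketch of that step, shelling out to openssl the same way the logged commands do (paths are illustrative, and this is not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash reproduces the shell steps from the log: ask openssl
    // for the certificate's subject hash, then symlink <hash>.0 in certDir to
    // the CA file so OpenSSL-based clients can find it during verification.
    func linkBySubjectHash(certPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl hash: %w", err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// The log links /usr/share/ca-certificates/*.pem into /etc/ssl/certs inside the guest VM.
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
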
	I1028 11:26:23.864349    3404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:26:23.871370    3404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:26:23.871904    3404 kubeadm.go:392] StartCluster: {Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:26:23.881233    3404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 11:26:23.917901    3404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:26:23.952239    3404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:26:23.982231    3404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:26:24.002195    3404 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:26:24.002195    3404 kubeadm.go:157] found existing configuration files:
	
	I1028 11:26:24.012232    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:26:24.034200    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:26:24.048200    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:26:24.086190    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:26:24.101998    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:26:24.117170    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:26:24.150337    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:26:24.168916    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:26:24.181040    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:26:24.210838    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:26:24.228476    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:26:24.238470    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
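
The loop above greps each pre-existing kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it, so kubeadm can regenerate a consistent set. A compact Go sketch of the same cleanup (not minikube's code; the file list and endpoint are taken from the log):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // cleanStaleKubeconfigs mirrors the grep/rm loop in the log: any existing
    // kubeconfig under /etc/kubernetes that does not point at the expected
    // control-plane endpoint is removed so kubeadm can regenerate it.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			continue // missing file: nothing to clean, as on this first start
    		}
    		if !bytes.Contains(data, []byte(endpoint)) {
    			fmt.Printf("removing stale %s\n", f)
    			_ = os.Remove(f)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
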
	I1028 11:26:24.257899    3404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:26:24.720246    3404 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:26:40.392712    3404 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:26:40.392899    3404 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:26:40.393130    3404 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:26:40.393258    3404 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:26:40.393625    3404 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:26:40.393817    3404 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:26:40.397274    3404 out.go:235]   - Generating certificates and keys ...
	I1028 11:26:40.397624    3404 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:26:40.397836    3404 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:26:40.398036    3404 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:26:40.398274    3404 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:26:40.398416    3404 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:26:40.398663    3404 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:26:40.399087    3404 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:26:40.399540    3404 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-201400 localhost] and IPs [172.27.248.250 127.0.0.1 ::1]
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-201400 localhost] and IPs [172.27.248.250 127.0.0.1 ::1]
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:26:40.400336    3404 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:26:40.400336    3404 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:26:40.401144    3404 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:26:40.401176    3404 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:26:40.401176    3404 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:26:40.403707    3404 out.go:235]   - Booting up control plane ...
	I1028 11:26:40.404357    3404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:26:40.404469    3404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:26:40.404469    3404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:26:40.404469    3404 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:26:40.405069    3404 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:26:40.405069    3404 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:26:40.405069    3404 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:26:40.405704    3404 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:26:40.405704    3404 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002597397s
	I1028 11:26:40.406287    3404 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:26:40.406287    3404 kubeadm.go:310] [api-check] The API server is healthy after 8.854898091s
	I1028 11:26:40.406287    3404 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:26:40.406287    3404 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:26:40.406287    3404 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:26:40.407400    3404 kubeadm.go:310] [mark-control-plane] Marking the node ha-201400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:26:40.407400    3404 kubeadm.go:310] [bootstrap-token] Using token: ur7fzz.cobvstbgnh3qhf27
	I1028 11:26:40.409992    3404 out.go:235]   - Configuring RBAC rules ...
	I1028 11:26:40.409992    3404 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:26:40.409992    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:26:40.411958    3404 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:26:40.411958    3404 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:26:40.411958    3404 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:26:40.411958    3404 kubeadm.go:310] 
	I1028 11:26:40.411958    3404 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:26:40.411958    3404 kubeadm.go:310] 
	I1028 11:26:40.411958    3404 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:26:40.411958    3404 kubeadm.go:310] 
	I1028 11:26:40.411958    3404 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:26:40.412983    3404 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:26:40.413060    3404 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:26:40.413060    3404 kubeadm.go:310] 
	I1028 11:26:40.413060    3404 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:26:40.413267    3404 kubeadm.go:310] 
	I1028 11:26:40.413267    3404 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:26:40.413267    3404 kubeadm.go:310] 
	I1028 11:26:40.413481    3404 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:26:40.413610    3404 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:26:40.413610    3404 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:26:40.413610    3404 kubeadm.go:310] 
	I1028 11:26:40.413610    3404 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:26:40.414333    3404 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:26:40.414395    3404 kubeadm.go:310] 
	I1028 11:26:40.414562    3404 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ur7fzz.cobvstbgnh3qhf27 \
	I1028 11:26:40.414829    3404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b \
	I1028 11:26:40.414829    3404 kubeadm.go:310] 	--control-plane 
	I1028 11:26:40.414829    3404 kubeadm.go:310] 
	I1028 11:26:40.415245    3404 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:26:40.415306    3404 kubeadm.go:310] 
	I1028 11:26:40.415542    3404 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ur7fzz.cobvstbgnh3qhf27 \
	I1028 11:26:40.415779    3404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b 
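
The --discovery-token-ca-cert-hash printed by kubeadm above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. The following Go sketch shows how such a value can be recomputed from ca.crt; the path comes from the log, and the token placeholder is deliberately left unfilled:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash computes the value kubeadm prints as
    // --discovery-token-ca-cert-hash: "sha256:" plus the SHA-256 of the DER
    // encoding of the CA certificate's Subject Public Key Info.
    func caCertHash(caPEM []byte) (string, error) {
    	block, _ := pem.Decode(caPEM)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(spki)
    	return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path taken from the log
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	h, err := caCertHash(pemBytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash", h)
    }
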
	I1028 11:26:40.415779    3404 cni.go:84] Creating CNI manager for ""
	I1028 11:26:40.415779    3404 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:26:40.419458    3404 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:26:40.441295    3404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:26:40.450633    3404 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:26:40.450633    3404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:26:40.514309    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 11:26:41.386419    3404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:26:41.400339    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:41.400339    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-201400 minikube.k8s.io/updated_at=2024_10_28T11_26_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-201400 minikube.k8s.io/primary=true
	I1028 11:26:41.444480    3404 ops.go:34] apiserver oom_adj: -16
	I1028 11:26:41.692663    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:42.194100    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:42.694196    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:43.193901    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:43.694160    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:44.196227    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:44.373890    3404 kubeadm.go:1113] duration metric: took 2.9874367s to wait for elevateKubeSystemPrivileges
	I1028 11:26:44.374020    3404 kubeadm.go:394] duration metric: took 20.5017543s to StartCluster
	I1028 11:26:44.374068    3404 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:44.374306    3404 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:26:44.375789    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:44.377505    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:26:44.377505    3404 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:26:44.377505    3404 start.go:241] waiting for startup goroutines ...
	I1028 11:26:44.377505    3404 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:26:44.377505    3404 addons.go:69] Setting storage-provisioner=true in profile "ha-201400"
	I1028 11:26:44.377505    3404 addons.go:69] Setting default-storageclass=true in profile "ha-201400"
	I1028 11:26:44.378127    3404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-201400"
	I1028 11:26:44.377505    3404 addons.go:234] Setting addon storage-provisioner=true in "ha-201400"
	I1028 11:26:44.378200    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:26:44.378345    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:26:44.378752    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:44.379729    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:44.671126    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:26:45.248868    3404 start.go:971] {"host.minikube.internal": 172.27.240.1} host record injected into CoreDNS's ConfigMap
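
The sed pipeline above splices a hosts{} stanza into the CoreDNS Corefile so host.minikube.internal resolves to the Hyper-V host from inside the cluster. A rough Go sketch of the same text transformation (operating on a sample Corefile string, not on the live ConfigMap, and not minikube's code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS hosts{} stanza ahead of the
    // "forward . /etc/resolv.conf" line, the same edit the sed pipeline in
    // the log performs on the coredns ConfigMap before replacing it.
    func injectHostRecord(corefile, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			indent := line[:len(line)-len(strings.TrimLeft(line, " "))]
    			out = append(out,
    				indent+"hosts {",
    				indent+"   "+ip+" "+name,
    				indent+"   fallthrough",
    				indent+"}")
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}"
    	fmt.Println(injectHostRecord(corefile, "172.27.240.1", "host.minikube.internal"))
    }
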
	I1028 11:26:46.726590    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:46.726714    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:46.727515    3404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:26:46.727515    3404 kapi.go:59] client config for ha-201400: &rest.Config{Host:"https://172.27.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:26:46.730750    3404 addons.go:234] Setting addon default-storageclass=true in "ha-201400"
	I1028 11:26:46.730939    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:26:46.732008    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:46.732273    3404 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:26:46.785520    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:46.785520    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:46.803181    3404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:26:46.806490    3404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:46.806490    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:26:46.806581    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:49.157130    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:49.157130    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:49.157407    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:26:49.250186    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:49.250778    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:49.250986    3404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:49.250986    3404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:26:49.251126    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:51.578904    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:51.578904    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:51.578988    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:26:51.999136    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:51.999864    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:52.000363    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:52.170502    3404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:54.349037    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:54.349091    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:54.349091    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:54.488132    3404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:54.690078    3404 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:26:54.690078    3404 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:26:54.690954    3404 round_trippers.go:463] GET https://172.27.255.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:26:54.690954    3404 round_trippers.go:469] Request Headers:
	I1028 11:26:54.690954    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:26:54.690954    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:26:54.707478    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:26:54.708398    3404 round_trippers.go:463] PUT https://172.27.255.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:26:54.708398    3404 round_trippers.go:469] Request Headers:
	I1028 11:26:54.708398    3404 round_trippers.go:473]     Content-Type: application/json
	I1028 11:26:54.708398    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:26:54.708398    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:26:54.712648    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:26:54.715845    3404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:26:54.719644    3404 addons.go:510] duration metric: took 10.3420229s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:26:54.719644    3404 start.go:246] waiting for cluster config update ...
	I1028 11:26:54.719644    3404 start.go:255] writing updated cluster config ...
	I1028 11:26:54.722734    3404 out.go:201] 
	I1028 11:26:54.741658    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:26:54.741806    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:26:54.747673    3404 out.go:177] * Starting "ha-201400-m02" control-plane node in "ha-201400" cluster
	I1028 11:26:54.750222    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:26:54.750222    3404 cache.go:56] Caching tarball of preloaded images
	I1028 11:26:54.750222    3404 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:26:54.750776    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:26:54.750994    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:26:54.762883    3404 start.go:360] acquireMachinesLock for ha-201400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:26:54.762883    3404 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-201400-m02"
	I1028 11:26:54.762883    3404 start.go:93] Provisioning new machine with config: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:26:54.762883    3404 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1028 11:26:54.767460    3404 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:26:54.767460    3404 start.go:159] libmachine.API.Create for "ha-201400" (driver="hyperv")
	I1028 11:26:54.768021    3404 client.go:168] LocalClient.Create starting
	I1028 11:26:54.768233    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 11:26:54.768233    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:26:54.768233    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:26:54.769022    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 11:26:54.769251    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:26:54.769251    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:26:54.769251    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 11:26:56.829949    3404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 11:26:56.829949    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:56.830054    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 11:26:58.706617    3404 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 11:26:58.707402    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:58.707504    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:27:00.296323    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:27:00.296685    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:00.296685    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:27:04.100440    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:27:04.100440    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:04.103637    3404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
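
The boot2docker.iso "download" above is really a local copy, since the source is a file:// URL pointing at the cached ISO. A minimal Go sketch of that step (not libmachine's downloader; paths taken from the log):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"strings"
    )

    // copyFromFileURL handles the "download" in the log, which is a local
    // copy: the source is a file:// URL pointing at the cached ISO and the
    // destination is the boot2docker.iso path under the .minikube cache.
    func copyFromFileURL(rawURL, dst string) error {
    	src := strings.TrimPrefix(rawURL, "file://")
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	err := copyFromFileURL(
    		"file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso",
    		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso`)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
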
	I1028 11:27:04.626055    3404 main.go:141] libmachine: Creating SSH key...
	I1028 11:27:04.802036    3404 main.go:141] libmachine: Creating VM...
	I1028 11:27:04.802036    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:27:07.855942    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:27:07.855942    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:07.855942    3404 main.go:141] libmachine: Using switch "Default Switch"
	I1028 11:27:07.855942    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:27:09.820105    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:27:09.820324    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:09.820459    3404 main.go:141] libmachine: Creating VHD
	I1028 11:27:09.820459    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 11:27:13.667051    3404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 32F6B9B4-1EAE-4BBC-AB35-E730EDC8FD37
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 11:27:13.667051    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:13.667051    3404 main.go:141] libmachine: Writing magic tar header
	I1028 11:27:13.667051    3404 main.go:141] libmachine: Writing SSH key tar header
	I1028 11:27:13.679270    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 11:27:16.961999    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:16.961999    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:16.962148    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\disk.vhd' -SizeBytes 20000MB
	I1028 11:27:19.610411    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:19.610411    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:19.610411    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-201400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 11:27:23.376062    3404 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-201400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 11:27:23.376206    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:23.376259    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-201400-m02 -DynamicMemoryEnabled $false
	I1028 11:27:25.750743    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:25.751024    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:25.751209    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-201400-m02 -Count 2
	I1028 11:27:28.019983    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:28.020659    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:28.020659    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-201400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\boot2docker.iso'
	I1028 11:27:30.730521    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:30.730521    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:30.731242    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-201400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\disk.vhd'
	I1028 11:27:33.515880    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:33.516296    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:33.516296    3404 main.go:141] libmachine: Starting VM...
	I1028 11:27:33.516360    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-201400-m02
	I1028 11:27:36.777107    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:36.777107    3404 main.go:141] libmachine: [stderr =====>] : 
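
The sequence above creates and starts the ha-201400-m02 VM by invoking one non-interactive powershell.exe process per Hyper-V cmdlet. A condensed Go sketch of that shell-out pattern follows; it reuses a shortened form of a few logged cmdlets, omits the -Path, VHD, and DVD steps, is not the libmachine driver itself, and only runs on a Windows host with Hyper-V:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runPS invokes PowerShell the same way the log shows libmachine doing it:
    // a fresh non-interactive powershell.exe process per Hyper-V cmdlet.
    func runPS(command string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Shortened cmdlet sequence based on the log: create the VM, pin its
    	// memory, set the CPU count, then start it.
    	steps := []string{
    		`Hyper-V\New-VM ha-201400-m02 -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
    		`Hyper-V\Set-VMMemory -VMName ha-201400-m02 -DynamicMemoryEnabled $false`,
    		`Hyper-V\Set-VMProcessor ha-201400-m02 -Count 2`,
    		`Hyper-V\Start-VM ha-201400-m02`,
    	}
    	for _, s := range steps {
    		if out, err := runPS(s); err != nil {
    			fmt.Printf("step failed: %s\n%s\n", s, out)
    			return
    		}
    	}
    }
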
	I1028 11:27:36.777107    3404 main.go:141] libmachine: Waiting for host to start...
	I1028 11:27:36.777107    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:39.171191    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:39.171191    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:39.171450    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:41.858125    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:41.858125    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:42.858440    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:45.202781    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:45.202984    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:45.202984    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:47.874624    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:47.875626    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:48.876621    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:51.217419    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:51.217419    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:51.218097    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:53.870442    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:53.870442    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:54.871157    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:57.207369    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:57.207369    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:57.207449    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:59.819718    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:59.819718    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:00.820964    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:03.213229    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:03.213229    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:03.213229    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:06.031533    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:06.031533    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:06.031533    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:08.464115    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:08.464756    3404 main.go:141] libmachine: [stderr =====>] : 
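	The loop above is the hyperv driver shelling out to powershell.exe (with -NoProfile -NonInteractive, exactly as logged) until Get-VM reports Running and the first network adapter exposes an address, which arrives here as 172.27.250.174. A minimal Go sketch of that wait loop follows; the helper names (psOutput, waitForHostIP), the one-second retry and the five-minute timeout are illustrative assumptions, not minikube's actual driver code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs a single PowerShell expression the same way the log shows
// (-NoProfile -NonInteractive) and returns its trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForHostIP polls the VM state and the first adapter's first address,
// mirroring the "Waiting for host to start..." loop in the log above.
func waitForHostIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		if err != nil || state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
		if err == nil && ip != "" {
			return ip, nil // e.g. 172.27.250.174 in the run above
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an address", vmName)
}

func main() {
	ip, err := waitForHostIP("ha-201400-m02", 5*time.Minute)
	if err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Println("VM address:", ip)
}
```

	In the run above the loop would have returned 172.27.250.174 on the query logged at 11:28:06, after which provisioning starts.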
	I1028 11:28:08.464756    3404 machine.go:93] provisionDockerMachine start ...
	I1028 11:28:08.464915    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:10.856985    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:10.856985    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:10.857795    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:13.648224    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:13.648224    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:13.654295    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:13.669739    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:13.669830    3404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:28:13.807194    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 11:28:13.807299    3404 buildroot.go:166] provisioning hostname "ha-201400-m02"
	I1028 11:28:13.807389    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:16.112936    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:16.113106    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:16.113106    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:18.888769    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:18.888769    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:18.895829    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:18.896572    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:18.896572    3404 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-201400-m02 && echo "ha-201400-m02" | sudo tee /etc/hostname
	I1028 11:28:19.075659    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-201400-m02
	
	I1028 11:28:19.075748    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:21.475184    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:21.475184    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:21.475184    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:24.215339    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:24.215339    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:24.220660    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:24.221828    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:24.221828    3404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-201400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-201400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-201400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:28:24.372236    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:28:24.372299    3404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:28:24.372363    3404 buildroot.go:174] setting up certificates
	I1028 11:28:24.372363    3404 provision.go:84] configureAuth start
	I1028 11:28:24.372489    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:26.626891    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:26.626891    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:26.626891    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:29.344159    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:29.344961    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:29.344961    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:31.634677    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:31.634677    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:31.634677    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:34.390067    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:34.390895    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:34.390895    3404 provision.go:143] copyHostCerts
	I1028 11:28:34.391041    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:28:34.391041    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:28:34.391041    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:28:34.391740    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:28:34.392945    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:28:34.393221    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:28:34.393221    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:28:34.393555    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:28:34.394843    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:28:34.395085    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:28:34.395222    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:28:34.395614    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:28:34.396799    3404 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-201400-m02 san=[127.0.0.1 172.27.250.174 ha-201400-m02 localhost minikube]
	I1028 11:28:34.834801    3404 provision.go:177] copyRemoteCerts
	I1028 11:28:34.845751    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:28:34.845751    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:37.169582    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:37.169697    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:37.169825    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:39.868014    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:39.868014    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:39.868862    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:28:39.986833    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1410237s)
	I1028 11:28:39.986955    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:28:39.987333    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:28:40.053103    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:28:40.053103    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:28:40.107701    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:28:40.108707    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:28:40.163487    3404 provision.go:87] duration metric: took 15.7908827s to configureAuth
	I1028 11:28:40.163546    3404 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:28:40.164176    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:28:40.164176    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:42.431210    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:42.431210    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:42.431325    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:45.252182    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:45.252182    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:45.258901    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:45.259092    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:45.259092    3404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:28:45.391961    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:28:45.392025    3404 buildroot.go:70] root file system type: tmpfs
	I1028 11:28:45.392278    3404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:28:45.392278    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:47.684389    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:47.684869    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:47.684869    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:50.441568    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:50.442597    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:50.448703    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:50.449542    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:50.449542    3404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.248.250"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:28:50.622070    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.248.250
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:28:50.622670    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:52.913976    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:52.913976    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:52.913976    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:55.675241    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:55.675241    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:55.684951    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:55.685429    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:55.685504    3404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:28:58.023426    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 11:28:58.023426    3404 machine.go:96] duration metric: took 49.5581113s to provisionDockerMachine
	I1028 11:28:58.023426    3404 client.go:171] duration metric: took 2m3.2539482s to LocalClient.Create
	I1028 11:28:58.023426    3404 start.go:167] duration metric: took 2m3.2545746s to libmachine.API.Create "ha-201400"
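	The docker.service unit echoed above is written to /lib/systemd/system/docker.service.new and only moved into place when diff reports a difference, after which the provisioner runs daemon-reload, enable and restart. A hedged sketch of rendering such a unit from a Go text/template is below; the trimmed template text and the renderDockerUnit helper are illustrative, assuming a template-driven provisioner rather than quoting minikube's own source.

```go
package dockerunit

import (
	"bytes"
	"text/template"
)

// unitTemplate is a trimmed-down version of the drop-in echoed in the log:
// ExecStart is cleared first so the full dockerd command line below replaces,
// rather than appends to, whatever the base unit defines.
const unitTemplate = `[Service]
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
`

type unitParams struct {
	NoProxy          string
	Provider         string
	InsecureRegistry string
}

// renderDockerUnit fills the template with the values visible in the log
// (NO_PROXY=172.27.248.250, provider=hyperv, 10.96.0.0/12).
func renderDockerUnit(p unitParams) (string, error) {
	t, err := template.New("docker.service").Parse(unitTemplate)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}
```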
	I1028 11:28:58.023426    3404 start.go:293] postStartSetup for "ha-201400-m02" (driver="hyperv")
	I1028 11:28:58.023426    3404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:28:58.037003    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:28:58.037003    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:00.339341    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:00.339398    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:00.339398    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:03.070256    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:03.071306    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:03.072038    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:29:03.192445    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1553838s)
	I1028 11:29:03.204331    3404 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:29:03.211286    3404 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:29:03.211286    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:29:03.211286    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:29:03.212822    3404 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:29:03.212822    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:29:03.224479    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:29:03.244804    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:29:03.296290    3404 start.go:296] duration metric: took 5.2728046s for postStartSetup
	I1028 11:29:03.299506    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:05.582782    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:05.582969    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:05.583589    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:08.257122    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:08.257122    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:08.257758    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:29:08.260394    3404 start.go:128] duration metric: took 2m13.496005s to createHost
	I1028 11:29:08.260394    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:10.542578    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:10.543311    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:10.543378    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:13.220449    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:13.220449    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:13.226427    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:29:13.226842    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:29:13.226842    3404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:29:13.364434    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114953.378785503
	
	I1028 11:29:13.364434    3404 fix.go:216] guest clock: 1730114953.378785503
	I1028 11:29:13.364434    3404 fix.go:229] Guest: 2024-10-28 11:29:13.378785503 +0000 UTC Remote: 2024-10-28 11:29:08.2603949 +0000 UTC m=+344.412007101 (delta=5.118390603s)
	I1028 11:29:13.364434    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:15.676995    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:15.677775    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:15.677775    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:18.403533    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:18.403533    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:18.409919    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:29:18.410499    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:29:18.410499    3404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730114953
	I1028 11:29:18.554784    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:29:13 UTC 2024
	
	I1028 11:29:18.554784    3404 fix.go:236] clock set: Mon Oct 28 11:29:13 UTC 2024
	 (err=<nil>)
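	fix.go above reads the guest clock with `date +%s.%N`, compares it with the host-side timestamp it recorded (a delta of about 5.1s here), and resets the guest with `sudo date -s @<epoch>`. A small Go sketch of that comparison follows; the drift threshold is an assumption for illustration, and since the exact epoch source is not visible in the log, the sketch simply syncs the guest to the host clock.

```go
package clockfix

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. 1730114953.378785503,
// as seen in the log) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

// fixCommand returns the guest-side command to run when the two clocks have
// drifted more than maxDelta apart. The log shows `sudo date -s @<epoch>`;
// which epoch the real code passes is not fully visible there, so this sketch
// syncs the guest to the host clock.
func fixCommand(guest, host time.Time, maxDelta time.Duration) (string, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= maxDelta {
		return "", false
	}
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
}
```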
	I1028 11:29:18.554784    3404 start.go:83] releasing machines lock for "ha-201400-m02", held for 2m23.790278s
	I1028 11:29:18.555149    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:20.831634    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:20.831634    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:20.831634    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:23.544539    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:23.544539    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:23.548487    3404 out.go:177] * Found network options:
	I1028 11:29:23.551574    3404 out.go:177]   - NO_PROXY=172.27.248.250
	W1028 11:29:23.554334    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:29:23.559173    3404 out.go:177]   - NO_PROXY=172.27.248.250
	W1028 11:29:23.562136    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:29:23.563540    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:29:23.565457    3404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:29:23.565457    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:23.574405    3404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:29:23.574405    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:25.897479    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:25.897479    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:25.897479    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:25.903822    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:25.903822    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:25.903951    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:28.677538    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:28.677538    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:28.678042    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:29:28.757700    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:28.758400    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:28.758458    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:29:28.776680    3404 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2022155s)
	W1028 11:29:28.776680    3404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:29:28.789888    3404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:29:28.818963    3404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:29:28.818963    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:29:28.818963    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:29:28.827198    3404 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.261682s)
	W1028 11:29:28.827198    3404 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
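	The status-127 failure above is worth flagging: the registry probe reuses the Windows binary name curl.exe but is executed inside the Linux guest over SSH, where only curl exists, so the check cannot succeed, and the `! Failing to connect to https://registry.k8s.io/` warning a few lines below appears to follow directly from it. A hedged Go sketch of a probe that picks the binary name from the guest OS is shown here; probeCommand and its parameters are illustrative, not minikube's code.

```go
package registryprobe

import "fmt"

// probeCommand builds the registry reachability check. The log above shows
// the host-side name ("curl.exe") being reused inside the Linux guest; a
// probe keyed off the guest OS would use plain "curl" there instead.
func probeCommand(guestOS, url string) string {
	bin := "curl"
	if guestOS == "windows" {
		bin = "curl.exe"
	}
	return fmt.Sprintf("%s -sS -m 2 %s", bin, url)
}
```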
	I1028 11:29:28.876276    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:29:28.910874    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:29:28.931072    3404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:29:28.943403    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1028 11:29:28.961264    3404 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:29:28.961264    3404 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:29:28.976649    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:29:29.011154    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:29:29.044776    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:29:29.092222    3404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:29:29.129462    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:29:29.163754    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:29:29.198945    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:29:29.233489    3404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:29:29.255753    3404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:29:29.268071    3404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:29:29.303310    3404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:29:29.333300    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:29.545275    3404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:29:29.578357    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:29:29.590766    3404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:29:29.627278    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:29:29.663367    3404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:29:29.733594    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:29:29.773102    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:29:29.814070    3404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:29:29.901296    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:29:29.931996    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:29:29.986927    3404 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:29:30.005514    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:29:30.025769    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:29:30.085203    3404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:29:30.317291    3404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:29:30.518281    3404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:29:30.518400    3404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:29:30.568016    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:30.779162    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:29:33.389791    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6105997s)
	I1028 11:29:33.402679    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:29:33.442916    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:29:33.480397    3404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:29:33.697651    3404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:29:33.912693    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:34.123465    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:29:34.172847    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:29:34.215095    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:34.438839    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:29:34.555007    3404 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:29:34.567577    3404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:29:34.577206    3404 start.go:563] Will wait 60s for crictl version
	I1028 11:29:34.590283    3404 ssh_runner.go:195] Run: which crictl
	I1028 11:29:34.608633    3404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:29:34.686848    3404 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:29:34.696815    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:29:34.746268    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:29:34.788565    3404 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:29:34.790578    3404 out.go:177]   - env NO_PROXY=172.27.248.250
	I1028 11:29:34.793565    3404 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:29:34.801566    3404 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:29:34.801566    3404 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:29:34.812567    3404 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:29:34.819067    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
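	The ip.go lines above walk the host's interfaces looking for the one named `vEthernet (Default Switch)`, take its IPv4 address (172.27.240.1), and then rewrite the guest's /etc/hosts so host.minikube.internal points at it. A minimal Go sketch of both steps is below; interfaceIPv4 and hostsEntryCommand are illustrative helpers, not the actual ip.go implementation.

```go
package hostip

import (
	"fmt"
	"net"
	"strings"
)

// interfaceIPv4 returns the first IPv4 address of the interface whose name
// starts with the given prefix, e.g. "vEthernet (Default Switch)" as in the
// ip.go lines above, which resolve to 172.27.240.1.
func interfaceIPv4(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP.To4(), nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
}

// hostsEntryCommand rebuilds the guest-side command from the log: strip any
// old host.minikube.internal line and append the current host address.
func hostsEntryCommand(hostIP net.IP) string {
	entry := fmt.Sprintf("%s\thost.minikube.internal", hostIP) // tab-separated, as in the log
	return fmt.Sprintf(`{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, entry)
}
```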
	I1028 11:29:34.843628    3404 mustload.go:65] Loading cluster: ha-201400
	I1028 11:29:34.844431    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:29:34.844975    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:29:37.118525    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:37.118525    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:37.118525    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:29:37.119177    3404 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400 for IP: 172.27.250.174
	I1028 11:29:37.119177    3404 certs.go:194] generating shared ca certs ...
	I1028 11:29:37.119177    3404 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:29:37.119879    3404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:29:37.119879    3404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:29:37.120536    3404 certs.go:256] generating profile certs ...
	I1028 11:29:37.120536    3404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key
	I1028 11:29:37.121109    3404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db
	I1028 11:29:37.121364    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.248.250 172.27.250.174 172.27.255.254]
	I1028 11:29:37.351648    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db ...
	I1028 11:29:37.351648    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db: {Name:mkc25ff31e988b8df10b3ffb0ba6e4f6e901478b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:29:37.353626    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db ...
	I1028 11:29:37.353626    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db: {Name:mk59b62ce762b421cd03d39be8b38667a90ff6d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:29:37.354289    3404 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt
	I1028 11:29:37.370213    3404 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key
	I1028 11:29:37.372220    3404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key
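	The certs.go/crypto.go lines above sign an apiserver certificate whose IP SANs cover the service IP (10.96.0.1), loopback, both control-plane node addresses and the kube-vip VIP (172.27.255.254). A minimal sketch of issuing such a certificate with Go's crypto/x509 follows, assuming an already-loaded CA certificate and key; it illustrates the SAN handling only and is not minikube's crypto.go.

```go
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert issues a serving certificate whose IP SANs match the set
// logged above (service IP, loopback, node IPs, VIP). caCert/caKey are an
// already-loaded CA, standing in for the shared minikubeCA reused here.
func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube"},
	}
	for _, s := range ips {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
```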
	I1028 11:29:37.372220    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:29:37.372220    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:29:37.372220    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:29:37.374221    3404 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:29:37.375667    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:29:37.376307    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:29:37.376701    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:37.376902    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:29:37.377226    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:29:37.377226    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:29:39.624399    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:39.624592    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:39.624661    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:42.328260    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:29:42.328260    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:42.328844    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:29:42.435833    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:29:42.445230    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:29:42.477996    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:29:42.485781    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:29:42.524646    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:29:42.532360    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:29:42.565164    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:29:42.573160    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:29:42.607661    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:29:42.614190    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:29:42.647035    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:29:42.653640    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:29:42.674361    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:29:42.725662    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:29:42.778833    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:29:42.839953    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:29:42.888963    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:29:42.938465    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:29:42.987636    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:29:43.035758    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:29:43.088857    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:29:43.142444    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:29:43.192448    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:29:43.243043    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:29:43.277462    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:29:43.310856    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:29:43.349308    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:29:43.385202    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:29:43.426286    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:29:43.467007    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:29:43.513436    3404 ssh_runner.go:195] Run: openssl version
	I1028 11:29:43.535531    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:29:43.568535    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:43.575201    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:43.587727    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:43.608324    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:29:43.639950    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:29:43.675552    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:29:43.682695    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:29:43.694765    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:29:43.715777    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:29:43.747358    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:29:43.780964    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:29:43.788082    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:29:43.799896    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:29:43.821274    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:29:43.854543    3404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:29:43.860832    3404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:29:43.860832    3404 kubeadm.go:934] updating node {m02 172.27.250.174 8443 v1.31.2 docker true true} ...
	I1028 11:29:43.861372    3404 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-201400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.250.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:29:43.861430    3404 kube-vip.go:115] generating kube-vip config ...
	I1028 11:29:43.873321    3404 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:29:43.903097    3404 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:29:43.903176    3404 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
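
The manifest above is written as a static pod (it is copied below to /etc/kubernetes/manifests/kube-vip.yaml), so the kubelet on each control-plane node runs kube-vip directly. With vip_leaderelection=true the instances elect a leader through the Lease named by vip_leasename, and the leader answers ARP for the shared address 172.27.255.254 on eth0, while cp_enable/lb_enable balance API traffic on port 8443. A hedged way to see which node currently holds the VIP, assuming kubectl access to this cluster and that kube-vip created the Lease under that name:

    # Leader-election Lease holder; the name comes from vip_leasename above.
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    # One static kube-vip pod per control-plane node.
    kubectl -n kube-system get pods -o wide | grep kube-vip
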
	I1028 11:29:43.915803    3404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:29:43.935872    3404 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:29:43.947252    3404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:29:43.970109    3404 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl
	I1028 11:29:43.970218    3404 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet
	I1028 11:29:43.970218    3404 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm
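
The checksum=file:... query string on each URL tells the downloader to verify the binary against the published .sha256 file before caching it under .minikube\cache and copying it into the VM. The equivalent manual verification, sketched for kubelet only (Linux shell; the two-space separator is required by sha256sum):

    # Fetch the binary and its published digest, then verify them
    # (manual equivalent of the checksum=file:... parameter, not from the run).
    curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
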
	I1028 11:29:45.119963    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:29:45.132967    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:29:45.144925    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:29:45.145610    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:29:45.411111    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:29:45.422104    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:29:45.437193    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:29:45.437414    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:29:45.523389    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:29:45.581488    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:29:45.593484    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:29:45.610610    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:29:45.610610    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:29:46.483786    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:29:46.502600    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1028 11:29:46.539821    3404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:29:46.571674    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:29:46.614790    3404 ssh_runner.go:195] Run: grep 172.27.255.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:29:46.621997    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
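
The one-liner above keeps /etc/hosts idempotent: it drops any existing control-plane.minikube.internal entry, appends the current VIP mapping, and sudo-copies the temp file back so that only the copy needs root. The same idiom, generalized with a hypothetical host and IP:

    # Replace-or-append a hosts entry without duplicating it (example host and IP).
    { grep -v $'\texample.internal$' /etc/hosts; printf '10.0.0.1\texample.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
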
	I1028 11:29:46.655870    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:46.870364    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:29:46.903292    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:29:46.904033    3404 start.go:317] joinCluster: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:29:46.904033    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:29:46.904033    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:29:49.099609    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:49.099982    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:49.100088    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:51.818832    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:29:51.818832    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:51.819351    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:29:52.276138    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3720439s)
	I1028 11:29:52.276138    3404 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:29:52.276138    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4qdjc.zt2t1z54vyly6fdz --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m02 --control-plane --apiserver-advertise-address=172.27.250.174 --apiserver-bind-port=8443"
	I1028 11:30:37.343465    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4qdjc.zt2t1z54vyly6fdz --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m02 --control-plane --apiserver-advertise-address=172.27.250.174 --apiserver-bind-port=8443": (45.0668178s)
	I1028 11:30:37.343525    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:30:38.168220    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-201400-m02 minikube.k8s.io/updated_at=2024_10_28T11_30_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-201400 minikube.k8s.io/primary=false
	I1028 11:30:38.396911    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-201400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:30:38.582121    3404 start.go:319] duration metric: took 51.6775031s to joinCluster
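
The join sequence above is: generate a join command on the existing control plane (kubeadm token create --print-join-command), run it on m02 with --control-plane so the new node gets its own API server and stacked etcd member, then label the node and strip the control-plane NoSchedule taint so it can also run workloads. The trailing dash on the taint argument is what removes it; the same two operations with plain kubectl, assuming a working kubeconfig for this cluster:

    # Label the new control-plane node and allow workloads to schedule on it.
    kubectl label --overwrite nodes ha-201400-m02 minikube.k8s.io/primary=false
    kubectl taint nodes ha-201400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
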
	I1028 11:30:38.582196    3404 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:30:38.584044    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:30:38.588123    3404 out.go:177] * Verifying Kubernetes components...
	I1028 11:30:38.603562    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:30:39.073310    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:30:39.112696    3404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:30:39.113586    3404 kapi.go:59] client config for ha-201400: &rest.Config{Host:"https://172.27.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:30:39.113586    3404 kubeadm.go:483] Overriding stale ClientConfig host https://172.27.255.254:8443 with https://172.27.248.250:8443
	I1028 11:30:39.114644    3404 node_ready.go:35] waiting up to 6m0s for node "ha-201400-m02" to be "Ready" ...
	I1028 11:30:39.114644    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:39.114644    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:39.114644    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:39.114644    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:39.132097    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
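
From here the client simply polls GET /api/v1/nodes/ha-201400-m02 roughly every 500ms (the repeated round_trippers blocks below) until the node's Ready condition turns True, with a 6m ceiling. The equivalent manual check, assuming kubectl is pointed at this cluster:

    # One-shot check of the Ready condition, then a blocking wait that mirrors the 6m ceiling.
    kubectl get node ha-201400-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl wait --for=condition=Ready node/ha-201400-m02 --timeout=6m
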
	I1028 11:30:39.615565    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:39.615565    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:39.615565    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:39.615565    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:39.621955    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:40.115430    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:40.115430    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:40.115430    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:40.115430    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:40.122200    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:40.615588    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:40.615630    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:40.615670    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:40.615670    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:40.620386    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:41.115593    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:41.116268    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:41.116268    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:41.116268    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:41.121541    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:41.122815    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:41.615564    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:41.615564    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:41.615564    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:41.615564    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:41.621583    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:42.115978    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:42.115978    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:42.115978    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:42.116144    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:42.123656    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:30:42.615649    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:42.615649    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:42.615649    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:42.615649    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:42.622875    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:30:43.114836    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:43.114836    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:43.114836    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:43.114836    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:43.120285    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:43.615673    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:43.615673    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:43.615673    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:43.615673    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:43.623677    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:30:43.624666    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:44.115405    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:44.115405    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:44.115405    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:44.115405    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:44.121969    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:44.615248    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:44.615248    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:44.615248    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:44.615248    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:44.621225    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:45.115722    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:45.115722    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:45.115722    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:45.115722    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:45.133769    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:30:45.615532    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:45.615600    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:45.615600    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:45.615600    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:45.624879    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:30:45.626181    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:46.115233    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:46.115233    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:46.115233    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:46.115233    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:46.159237    3404 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I1028 11:30:46.615360    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:46.615360    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:46.615360    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:46.615360    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:46.624874    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:30:47.115052    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:47.115052    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:47.115052    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:47.115052    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:47.120030    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:47.615970    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:47.616034    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:47.616095    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:47.616095    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:47.622485    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:48.115106    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:48.115106    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:48.115106    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:48.115106    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:48.120252    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:48.120252    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:48.615545    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:48.615545    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:48.615545    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:48.615545    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:48.622657    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:30:49.116164    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:49.116164    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:49.116164    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:49.116347    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:49.122761    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:49.614810    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:49.614810    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:49.614810    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:49.614810    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:49.620564    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:50.114989    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:50.114989    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:50.114989    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:50.114989    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:50.120830    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:50.121710    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:50.615723    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:50.616141    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:50.616141    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:50.616141    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:50.622622    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:51.114909    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:51.114909    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:51.114909    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:51.114909    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:51.121161    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:51.620225    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:51.620225    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:51.620225    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:51.620225    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:51.636623    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:30:52.115494    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:52.115494    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:52.115494    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:52.115494    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:52.121658    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:52.122835    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:52.615746    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:52.615746    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:52.615746    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:52.615746    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:52.624411    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:30:53.115831    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:53.115945    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:53.115945    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:53.115945    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:53.121383    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:53.616170    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:53.616170    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:53.616170    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:53.616170    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:53.621282    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:54.115167    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:54.115167    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:54.115167    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:54.115167    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:54.136979    3404 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 11:30:54.140424    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:54.616048    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:54.616151    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:54.616151    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:54.616151    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:54.622586    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:55.115149    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:55.115695    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:55.115695    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:55.115695    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:55.122049    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:55.615525    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:55.615525    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:55.615525    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:55.615525    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:55.622274    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:56.115161    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:56.115161    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:56.115161    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:56.115161    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:56.120640    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:56.615852    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:56.616292    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:56.616292    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:56.616292    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:56.623488    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:56.624137    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:57.115655    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:57.115725    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:57.115725    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:57.115725    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:57.120582    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:57.615765    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:57.615765    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:57.615765    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:57.615765    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:57.621995    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:58.120219    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:58.120262    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:58.120262    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:58.120326    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:58.132880    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:30:58.614957    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:58.614957    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:58.614957    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:58.614957    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:58.620921    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:59.115558    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:59.115682    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:59.115682    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:59.115682    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:59.121956    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:59.122523    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:59.615523    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:59.615523    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:59.615523    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:59.615523    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:59.630384    3404 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1028 11:31:00.115131    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.115131    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.115131    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.115131    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.124848    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:31:00.125775    3404 node_ready.go:49] node "ha-201400-m02" has status "Ready":"True"
	I1028 11:31:00.125942    3404 node_ready.go:38] duration metric: took 21.0108928s for node "ha-201400-m02" to be "Ready" ...
	I1028 11:31:00.125942    3404 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
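
Once the node reports Ready, the same polling pattern is applied to the system-critical pods: one list of kube-system pods, then per-pod GETs for coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler on both nodes. A manual spot check using the same labels, assuming kubectl access:

    # Spot-check the labels the waiter uses (example commands, not from the run).
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m
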
	I1028 11:31:00.126143    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:00.126192    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.126192    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.126192    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.152864    3404 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I1028 11:31:00.163453    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.164454    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n2qnf
	I1028 11:31:00.164454    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.164454    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.164454    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.177756    3404 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 11:31:00.178520    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.178520    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.178604    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.178604    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.191578    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:31:00.192605    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.192605    3404 pod_ready.go:82] duration metric: took 28.151ms for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.192681    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.192832    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zt6f6
	I1028 11:31:00.192987    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.193094    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.193094    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.204089    3404 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:31:00.205024    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.206022    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.206022    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.206022    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.211751    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.212651    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.212651    3404 pod_ready.go:82] duration metric: took 19.9695ms for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.212709    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.212813    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400
	I1028 11:31:00.212907    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.212907    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.212907    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.219646    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:00.220417    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.220417    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.220417    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.220417    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.225034    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:31:00.226022    3404 pod_ready.go:93] pod "etcd-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.226022    3404 pod_ready.go:82] duration metric: took 13.3136ms for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.226022    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.226022    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m02
	I1028 11:31:00.226022    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.226022    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.226022    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.235103    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:31:00.235516    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.235516    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.235516    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.235516    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.242125    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.242125    3404 pod_ready.go:93] pod "etcd-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.242125    3404 pod_ready.go:82] duration metric: took 16.102ms for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.242125    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.316779    3404 request.go:632] Waited for 74.654ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:31:00.317237    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:31:00.317299    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.317339    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.317374    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.323256    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.515618    3404 request.go:632] Waited for 191.5293ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.515958    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.515958    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.515958    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.515958    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.521230    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.521892    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.521892    3404 pod_ready.go:82] duration metric: took 279.7641ms for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.521892    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.715606    3404 request.go:632] Waited for 193.3382ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:31:00.716247    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:31:00.716247    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.716247    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.716334    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.723095    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:00.915578    3404 request.go:632] Waited for 190.5016ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.915578    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.915578    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.915578    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.915578    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.921253    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.921970    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.922060    3404 pod_ready.go:82] duration metric: took 400.1634ms for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.922060    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.115419    3404 request.go:632] Waited for 193.2129ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:31:01.115419    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:31:01.115419    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.115419    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.115419    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.122361    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:01.316005    3404 request.go:632] Waited for 192.3075ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:01.316525    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:01.316525    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.316525    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.316525    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.321710    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:01.322514    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:01.322514    3404 pod_ready.go:82] duration metric: took 400.4494ms for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.322601    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.516314    3404 request.go:632] Waited for 193.6428ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:31:01.516314    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:31:01.516314    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.516314    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.516314    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.522527    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:01.715186    3404 request.go:632] Waited for 191.5876ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:01.715186    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:01.715186    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.715186    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.715186    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.722488    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:31:01.726074    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:01.727347    3404 pod_ready.go:82] duration metric: took 404.7413ms for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.727347    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.916212    3404 request.go:632] Waited for 188.6432ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:31:01.916756    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:31:01.916791    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.916791    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.916791    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.923093    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:02.116099    3404 request.go:632] Waited for 191.9217ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.116099    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.116099    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.116099    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.116099    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.122622    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:02.122700    3404 pod_ready.go:93] pod "kube-proxy-fg4c7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:02.123247    3404 pod_ready.go:82] duration metric: took 395.8954ms for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.123247    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.316767    3404 request.go:632] Waited for 193.5174ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:31:02.317309    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:31:02.317365    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.317365    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.317365    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.333733    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:31:02.516083    3404 request.go:632] Waited for 181.1833ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:02.516083    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:02.516541    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.516541    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.516541    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.522426    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:02.523224    3404 pod_ready.go:93] pod "kube-proxy-hkdzx" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:02.523305    3404 pod_ready.go:82] duration metric: took 400.0532ms for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.523305    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.716046    3404 request.go:632] Waited for 192.5752ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:31:02.716046    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:31:02.716046    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.716046    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.716046    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.722716    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:02.915902    3404 request.go:632] Waited for 192.3158ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.915902    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.915902    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.915902    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.915902    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.926561    3404 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:31:02.927743    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:02.927743    3404 pod_ready.go:82] duration metric: took 404.4339ms for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.927866    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:03.116036    3404 request.go:632] Waited for 188.1673ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:31:03.116036    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:31:03.116036    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.116036    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.116036    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.119621    3404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:31:03.318460    3404 request.go:632] Waited for 198.8366ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:03.318460    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:03.318460    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.318460    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.318460    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.325539    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:31:03.326336    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:03.326336    3404 pod_ready.go:82] duration metric: took 398.4655ms for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:03.326336    3404 pod_ready.go:39] duration metric: took 3.2002865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
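
The readiness loop logged above repeatedly GETs each control-plane pod and inspects its `Ready` condition, backing off when the client-side rate limiter throttles requests (the ~400ms waits). A minimal sketch of that check, assuming an already-authenticated `*http.Client` pointed at the apiserver (TLS and auth setup omitted); the struct and function names here are illustrative, not minikube's:

```go
package readiness

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// podStatus mirrors only the fields of a Pod object needed to test readiness.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isPodReady GETs a single pod and reports whether its Ready condition is "True".
func isPodReady(c *http.Client, apiServer, ns, name string) (bool, error) {
	resp, err := c.Get(fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", apiServer, ns, name))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var p podStatus
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		return false, err
	}
	for _, cond := range p.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

// waitForPodReady polls until the pod is Ready or the deadline (6m0s in the log) expires.
func waitForPodReady(c *http.Client, apiServer, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := isPodReady(c, apiServer, ns, name)
		if err == nil && ready {
			return nil
		}
		time.Sleep(400 * time.Millisecond) // comparable to the cadence visible above
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}
```
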
	I1028 11:31:03.326336    3404 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:31:03.339054    3404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:31:03.367679    3404 api_server.go:72] duration metric: took 24.7852027s to wait for apiserver process to appear ...
	I1028 11:31:03.367679    3404 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:31:03.367679    3404 api_server.go:253] Checking apiserver healthz at https://172.27.248.250:8443/healthz ...
	I1028 11:31:03.378574    3404 api_server.go:279] https://172.27.248.250:8443/healthz returned 200:
	ok
	I1028 11:31:03.378574    3404 round_trippers.go:463] GET https://172.27.248.250:8443/version
	I1028 11:31:03.378574    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.378574    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.378574    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.380673    3404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:31:03.380929    3404 api_server.go:141] control plane version: v1.31.2
	I1028 11:31:03.380929    3404 api_server.go:131] duration metric: took 13.25ms to wait for apiserver health ...
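
Before checking individual components, the log shows the apiserver being confirmed healthy via `/healthz` (expecting a 200 with body "ok") and its version being read from `/version` (v1.31.2 here). A compact sketch of those two requests, using the same assumed authenticated client as above:

```go
package health

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// apiServerHealthy reports an error unless GET /healthz returns 200 "ok";
// on success it returns the control-plane gitVersion from /version.
func apiServerHealthy(c *http.Client, apiServer string) (string, error) {
	resp, err := c.Get(apiServer + "/healthz")
	if err != nil {
		return "", err
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return "", fmt.Errorf("healthz returned %d %q", resp.StatusCode, body)
	}

	vresp, err := c.Get(apiServer + "/version")
	if err != nil {
		return "", err
	}
	defer vresp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(vresp.Body).Decode(&v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}
```
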
	I1028 11:31:03.381001    3404 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:31:03.515762    3404 request.go:632] Waited for 134.707ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.516178    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.516178    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.516178    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.516274    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.528080    3404 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:31:03.535230    3404 system_pods.go:59] 17 kube-system pods found
	I1028 11:31:03.535313    3404 system_pods.go:61] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:31:03.535450    3404 system_pods.go:61] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:31:03.535450    3404 system_pods.go:61] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:31:03.535491    3404 system_pods.go:74] duration metric: took 154.4879ms to wait for pod list to return data ...
	I1028 11:31:03.535585    3404 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:31:03.715972    3404 request.go:632] Waited for 180.3188ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:31:03.716386    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:31:03.716493    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.716493    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.716493    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.723428    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:03.723491    3404 default_sa.go:45] found service account: "default"
	I1028 11:31:03.723491    3404 default_sa.go:55] duration metric: took 187.9036ms for default service account to be created ...
	I1028 11:31:03.723491    3404 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:31:03.915533    3404 request.go:632] Waited for 192.0396ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.915533    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.915533    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.915533    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.915533    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.932331    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:31:03.939800    3404 system_pods.go:86] 17 kube-system pods found
	I1028 11:31:03.939883    3404 system_pods.go:89] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:31:03.939944    3404 system_pods.go:89] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:31:03.940052    3404 system_pods.go:89] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:31:03.940206    3404 system_pods.go:89] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:31:03.940206    3404 system_pods.go:89] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:31:03.940206    3404 system_pods.go:126] duration metric: took 216.7132ms to wait for k8s-apps to be running ...
	I1028 11:31:03.940206    3404 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:31:03.951195    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:31:03.982483    3404 system_svc.go:56] duration metric: took 42.2764ms WaitForService to wait for kubelet
	I1028 11:31:03.982483    3404 kubeadm.go:582] duration metric: took 25.3999999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
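
The kubelet check above is a single remote `systemctl is-active` whose exit status decides success. A rough equivalent, shelling out to a local `ssh` binary rather than minikube's internal ssh_runner; the user, host, key path and exact systemctl arguments are illustrative assumptions:

```go
package svc

import (
	"fmt"
	"os/exec"
)

// kubeletRunning returns nil when `systemctl is-active --quiet kubelet`
// exits 0 on the remote host, i.e. the service is reported active.
func kubeletRunning(user, host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		fmt.Sprintf("%s@%s", user, host),
		"sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("kubelet not active on %s: %w", host, err)
	}
	return nil
}
```
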
	I1028 11:31:03.982734    3404 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:31:04.115817    3404 request.go:632] Waited for 133.0502ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes
	I1028 11:31:04.115817    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes
	I1028 11:31:04.115817    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:04.115817    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:04.115817    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:04.122450    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:04.123721    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:31:04.123721    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:31:04.123870    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:31:04.123870    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:31:04.123870    3404 node_conditions.go:105] duration metric: took 141.1349ms to run NodePressure ...
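
The NodePressure step lists all nodes and reads the reported capacity values logged above (cpu and ephemeral-storage per node). A stripped-down sketch of that read, again assuming an authenticated client and decoding only the capacity map:

```go
package nodes

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeList mirrors only the name and capacity portion of the /api/v1/nodes response.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

// printNodeCapacity prints each node's CPU and ephemeral-storage capacity,
// the two values the verification step above reports.
func printNodeCapacity(c *http.Client, apiServer string) error {
	resp, err := c.Get(apiServer + "/api/v1/nodes")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var nl nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
		return err
	}
	for _, n := range nl.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
	return nil
}
```
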
	I1028 11:31:04.123920    3404 start.go:241] waiting for startup goroutines ...
	I1028 11:31:04.123920    3404 start.go:255] writing updated cluster config ...
	I1028 11:31:04.128708    3404 out.go:201] 
	I1028 11:31:04.146040    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:31:04.146287    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:31:04.156971    3404 out.go:177] * Starting "ha-201400-m03" control-plane node in "ha-201400" cluster
	I1028 11:31:04.159815    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:31:04.159948    3404 cache.go:56] Caching tarball of preloaded images
	I1028 11:31:04.160277    3404 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:31:04.160277    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:31:04.160277    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:31:04.165639    3404 start.go:360] acquireMachinesLock for ha-201400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:31:04.165870    3404 start.go:364] duration metric: took 231.6µs to acquireMachinesLock for "ha-201400-m03"
	I1028 11:31:04.166208    3404 start.go:93] Provisioning new machine with config: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:31:04.166322    3404 start.go:125] createHost starting for "m03" (driver="hyperv")
	I1028 11:31:04.171180    3404 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:31:04.171180    3404 start.go:159] libmachine.API.Create for "ha-201400" (driver="hyperv")
	I1028 11:31:04.171180    3404 client.go:168] LocalClient.Create starting
	I1028 11:31:04.172282    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 11:31:04.172569    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:31:04.172569    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:31:04.172796    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 11:31:04.172984    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:31:04.173077    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:31:04.173142    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 11:31:06.273171    3404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 11:31:06.273171    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:06.273171    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 11:31:08.191092    3404 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 11:31:08.191092    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:08.192052    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:31:09.805432    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:31:09.805432    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:09.805432    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:31:13.798323    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:31:13.798323    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:13.800854    3404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:31:14.329718    3404 main.go:141] libmachine: Creating SSH key...
	I1028 11:31:14.487405    3404 main.go:141] libmachine: Creating VM...
	I1028 11:31:14.487405    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:31:17.646183    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:31:17.647119    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:17.647188    3404 main.go:141] libmachine: Using switch "Default Switch"
	I1028 11:31:17.647216    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:31:19.593763    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:31:19.594759    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:19.594806    3404 main.go:141] libmachine: Creating VHD
	I1028 11:31:19.594950    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 11:31:23.482772    3404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3384BBC-27C0-454C-978E-068E6868F243
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 11:31:23.483638    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:23.483638    3404 main.go:141] libmachine: Writing magic tar header
	I1028 11:31:23.483900    3404 main.go:141] libmachine: Writing SSH key tar header
	I1028 11:31:23.495586    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 11:31:26.834194    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:26.835007    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:26.835260    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\disk.vhd' -SizeBytes 20000MB
	I1028 11:31:29.590992    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:29.591346    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:29.591527    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-201400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 11:31:33.440822    3404 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-201400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 11:31:33.440907    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:33.440907    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-201400-m03 -DynamicMemoryEnabled $false
	I1028 11:31:35.929563    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:35.929563    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:35.929563    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-201400-m03 -Count 2
	I1028 11:31:38.278858    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:38.279463    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:38.279463    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-201400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\boot2docker.iso'
	I1028 11:31:41.011172    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:41.011172    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:41.011172    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-201400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\disk.vhd'
	I1028 11:31:43.883810    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:43.883810    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:43.883810    3404 main.go:141] libmachine: Starting VM...
	I1028 11:31:43.883810    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-201400-m03
	I1028 11:31:47.163638    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:47.163638    3404 main.go:141] libmachine: [stderr =====>] : 
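
Every Hyper-V step above is driven the same way: the driver shells out to powershell.exe with -NoProfile -NonInteractive and captures stdout and stderr separately, which is exactly what the `[executing ==>]` / `[stdout =====>]` / `[stderr =====>]` lines record. A minimal sketch of that pattern; the helper and function names are mine, not minikube's:

```go
package hyperv

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPS executes one PowerShell command the way the log lines show:
// non-interactive, no profile, stdout and stderr captured separately.
func runPS(command string) (stdout, stderr string, err error) {
	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return out.String(), errBuf.String(), err
}

// startVM starts a VM by name (ha-201400-m03 in the log), surfacing stderr on failure.
func startVM(name string) error {
	_, stderr, err := runPS(fmt.Sprintf("Hyper-V\\Start-VM %s", name))
	if err != nil {
		return fmt.Errorf("Start-VM %s failed: %v (%s)", name, err, stderr)
	}
	return nil
}
```
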
	I1028 11:31:47.164115    3404 main.go:141] libmachine: Waiting for host to start...
	I1028 11:31:47.164176    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:31:49.631032    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:31:49.631032    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:49.631911    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:31:52.311475    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:52.311766    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:53.312451    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:31:55.722707    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:31:55.723169    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:55.723360    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:31:58.424042    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:58.424433    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:59.424938    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:01.813681    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:01.814283    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:01.814419    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:04.525342    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:32:04.525342    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:05.525933    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:07.936429    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:07.936429    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:07.936429    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:10.638434    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:32:10.638434    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:11.640547    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:14.022562    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:14.022649    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:14.022649    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:16.819611    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:16.819611    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:16.820395    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:19.079573    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:19.079643    3404 main.go:141] libmachine: [stderr =====>] : 
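
The "Waiting for host to start" phase above alternates two queries, `(Get-VM <name>).state` and the first IPv4 address of the first network adapter, sleeping between empty results until an address appears (172.27.254.230 here). A sketch of that loop under the same assumptions as the previous block; `run` stands in for a helper like the hypothetical runPS above:

```go
package hyperv

import (
	"fmt"
	"strings"
	"time"
)

// waitForIP polls the VM's state and first adapter address until Hyper-V
// reports an IP, mirroring the state/ipaddresses query pair in the log.
func waitForIP(run func(cmd string) (stdout, stderr string, err error),
	name string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	ipQuery := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	for time.Now().Before(deadline) {
		state, _, err := run(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		if err != nil {
			return "", err
		}
		if strings.TrimSpace(state) == "Running" {
			ip, _, err := run(ipQuery)
			if err != nil {
				return "", err
			}
			if ip = strings.TrimSpace(ip); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second) // the log shows roughly one retry per second of wall time
	}
	return "", fmt.Errorf("no IP reported for %s within %s", name, timeout)
}
```
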
	I1028 11:32:19.079704    3404 machine.go:93] provisionDockerMachine start ...
	I1028 11:32:19.079824    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:21.410052    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:21.410115    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:21.410115    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:24.153066    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:24.153066    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:24.159575    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:24.173058    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:24.173058    3404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:32:24.298523    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 11:32:24.298523    3404 buildroot.go:166] provisioning hostname "ha-201400-m03"
	I1028 11:32:24.298643    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:26.604882    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:26.604882    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:26.604882    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:29.335617    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:29.335774    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:29.342709    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:29.342785    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:29.343519    3404 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-201400-m03 && echo "ha-201400-m03" | sudo tee /etc/hostname
	I1028 11:32:29.491741    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-201400-m03
	
	I1028 11:32:29.491741    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:31.813044    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:31.813121    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:31.813187    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:34.580181    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:34.580243    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:34.585735    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:34.586285    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:34.586348    3404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-201400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-201400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-201400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:32:34.739085    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:32:34.739158    3404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:32:34.739158    3404 buildroot.go:174] setting up certificates
	I1028 11:32:34.739236    3404 provision.go:84] configureAuth start
	I1028 11:32:34.739236    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:37.038417    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:37.038629    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:37.038629    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:39.850893    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:39.851830    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:39.851921    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:42.203798    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:42.203882    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:42.203882    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:44.974041    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:44.974041    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:44.974180    3404 provision.go:143] copyHostCerts
	I1028 11:32:44.974325    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:32:44.974679    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:32:44.974821    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:32:44.975260    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:32:44.977021    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:32:44.977422    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:32:44.977487    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:32:44.978016    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:32:44.978685    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:32:44.979373    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:32:44.979373    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:32:44.979772    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:32:44.981035    3404 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-201400-m03 san=[127.0.0.1 172.27.254.230 ha-201400-m03 localhost minikube]
	I1028 11:32:45.234548    3404 provision.go:177] copyRemoteCerts
	I1028 11:32:45.245541    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:32:45.245541    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:47.535918    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:47.535918    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:47.536472    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:50.301858    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:50.301858    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:50.302089    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:32:50.409365    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1637653s)
	I1028 11:32:50.409365    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:32:50.410018    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:32:50.462954    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:32:50.462954    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:32:50.526104    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:32:50.526104    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:32:50.583918    3404 provision.go:87] duration metric: took 15.844503s to configureAuth
	I1028 11:32:50.583918    3404 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:32:50.584982    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:32:50.585096    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:52.877277    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:52.877703    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:52.877703    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:55.618551    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:55.618551    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:55.625107    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:55.625643    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:55.625643    3404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:32:55.753218    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:32:55.753388    3404 buildroot.go:70] root file system type: tmpfs
	I1028 11:32:55.753506    3404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:32:55.753605    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:58.095514    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:58.096206    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:58.096320    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:00.865268    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:00.865268    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:00.870598    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:00.871041    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:00.871041    3404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.248.250"
	Environment="NO_PROXY=172.27.248.250,172.27.250.174"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:33:01.032161    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.248.250
	Environment=NO_PROXY=172.27.248.250,172.27.250.174
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:33:01.032241    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:03.317619    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:03.318691    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:03.318959    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:06.117473    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:06.118542    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:06.124212    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:06.124739    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:06.124739    3404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:33:08.409336    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 11:33:08.409421    3404 machine.go:96] duration metric: took 49.3291592s to provisionDockerMachine
	I1028 11:33:08.409421    3404 client.go:171] duration metric: took 2m4.2368364s to LocalClient.Create
	I1028 11:33:08.409476    3404 start.go:167] duration metric: took 2m4.2368364s to libmachine.API.Create "ha-201400"
	I1028 11:33:08.409476    3404 start.go:293] postStartSetup for "ha-201400-m03" (driver="hyperv")
	I1028 11:33:08.409514    3404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:33:08.421751    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:33:08.421751    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:10.745137    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:10.745641    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:10.745809    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:13.552220    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:13.552220    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:13.552220    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:33:13.665726    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2439155s)
	I1028 11:33:13.677860    3404 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:33:13.685488    3404 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:33:13.685488    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:33:13.685955    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:33:13.687085    3404 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:33:13.687085    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:33:13.702423    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:33:13.724872    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:33:13.773674    3404 start.go:296] duration metric: took 5.3641002s for postStartSetup
	I1028 11:33:13.777321    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:16.097507    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:16.097507    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:16.098321    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:18.857782    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:18.858123    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:18.858381    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:33:18.860884    3404 start.go:128] duration metric: took 2m14.6930397s to createHost
	I1028 11:33:18.861003    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:21.218571    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:21.218571    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:21.218571    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:24.007550    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:24.008397    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:24.014031    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:24.014621    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:24.014732    3404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:33:24.146166    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115204.160216981
	
	I1028 11:33:24.146166    3404 fix.go:216] guest clock: 1730115204.160216981
	I1028 11:33:24.146166    3404 fix.go:229] Guest: 2024-10-28 11:33:24.160216981 +0000 UTC Remote: 2024-10-28 11:33:18.8610034 +0000 UTC m=+595.009783801 (delta=5.299213581s)
	I1028 11:33:24.146274    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:26.492581    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:26.493667    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:26.493667    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:29.198781    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:29.198781    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:29.206252    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:29.206870    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:29.206870    3404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730115204
	I1028 11:33:29.342901    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:33:24 UTC 2024
	
	I1028 11:33:29.342901    3404 fix.go:236] clock set: Mon Oct 28 11:33:24 UTC 2024
	 (err=<nil>)
	I1028 11:33:29.342901    3404 start.go:83] releasing machines lock for "ha-201400-m03", held for 2m25.1753903s
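The clock-fix step above reads the guest clock with `date +%s.%N`, compares it against the host-side reference, and, because the delta (about 5.3s here) exceeds the tolerance, writes a corrected time back with `sudo date -s @<epoch>`. A rough sketch of that comparison under assumed names and tolerance (the epoch the real run passes is the host time at the moment the SSH command fires, so it differs slightly from the fixed reference used here):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockFixCommand parses the guest's `date +%s.%N` output, compares it with a
    // reference time, and, if the skew exceeds maxSkew, returns the command that
    // would reset the guest clock to the reference time.
    func clockFixCommand(guestOut string, ref time.Time, maxSkew time.Duration) (string, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return "", false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        skew := guest.Sub(ref)
        if skew < 0 {
            skew = -skew
        }
        if skew <= maxSkew {
            return "", false, nil
        }
        return fmt.Sprintf("sudo date -s @%d", ref.Unix()), true, nil
    }

    func main() {
        // Guest clock and host-side reference taken from the log lines above.
        cmd, needed, err := clockFixCommand("1730115204.160216981",
            time.Unix(1730115198, 861003400), 2*time.Second)
        fmt.Println(cmd, needed, err) // prints: sudo date -s @1730115198 true <nil>
    }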
	I1028 11:33:29.343199    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:31.658432    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:31.658727    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:31.658862    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:34.447452    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:34.447510    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:34.450031    3404 out.go:177] * Found network options:
	I1028 11:33:34.452942    3404 out.go:177]   - NO_PROXY=172.27.248.250,172.27.250.174
	W1028 11:33:34.455757    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.455757    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:33:34.458824    3404 out.go:177]   - NO_PROXY=172.27.248.250,172.27.250.174
	W1028 11:33:34.461441    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.461441    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.462809    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.462931    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:33:34.465524    3404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:33:34.465721    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:34.475804    3404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:33:34.475804    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:36.934037    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:39.690303    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:39.690303    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:39.690997    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:33:39.716116    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:39.716743    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:39.717180    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:33:39.785959    3404 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3100944s)
	W1028 11:33:39.785959    3404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:33:39.800048    3404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:33:39.804923    3404 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3393389s)
	W1028 11:33:39.804923    3404 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 11:33:39.837547    3404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:33:39.837631    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:33:39.838028    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:33:39.892216    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:33:39.925070    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1028 11:33:39.926067    3404 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:33:39.926067    3404 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
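The status-127 failure a few lines up is not a network problem: the connectivity probe was issued as `curl.exe`, the Windows binary name, but it executes inside the Linux guest over SSH, where only `curl` exists ("bash: line 1: curl.exe: command not found"). That is what surfaces as the two warnings just above about failing to reach https://registry.k8s.io/. A hypothetical sketch of keying the probe binary off the guest OS rather than the host OS (the helper and its logic are assumptions for illustration, not minikube's actual code):

    package main

    import (
        "fmt"
        "runtime"
    )

    // probeBinary picks the curl executable name for the machine the probe will
    // actually run on; deriving it from runtime.GOOS (the Windows host) is what
    // produces "curl.exe: command not found" inside a Linux guest.
    func probeBinary(guestOS string) string {
        if guestOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Println("host OS:", runtime.GOOS)
        fmt.Println("probe binary for the linux guest:", probeBinary("linux"))
    }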
	I1028 11:33:39.953734    3404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:33:39.966683    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:33:40.013414    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:33:40.055969    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:33:40.095976    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:33:40.130789    3404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:33:40.164671    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:33:40.198078    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:33:40.233431    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
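The run of `sed -i -r` commands above edits /etc/containerd/config.toml in place: forcing `SystemdCgroup = false` (the cgroupfs driver chosen earlier), switching runtimes to `io.containerd.runc.v2`, pointing `conf_dir` at /etc/cni/net.d, and re-enabling unprivileged ports. The same indentation-preserving substitution can be expressed in Go; a small sketch for the SystemdCgroup edit (the sample TOML fragment is an assumption):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // In-memory stand-in for a fragment of /etc/containerd/config.toml.
        conf := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "    SystemdCgroup = true\n"

        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }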
	I1028 11:33:40.273347    3404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:33:40.295621    3404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:33:40.307268    3404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:33:40.340872    3404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:33:40.378111    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:40.596824    3404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:33:40.637766    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:33:40.650319    3404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:33:40.688745    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:33:40.723752    3404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:33:40.771046    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:33:40.808497    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:33:40.845526    3404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:33:40.914069    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:33:40.940742    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:33:40.991970    3404 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:33:41.012187    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:33:41.033575    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:33:41.086429    3404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:33:41.298370    3404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:33:41.493395    3404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:33:41.493395    3404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
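The 130-byte /etc/docker/daemon.json pushed above is what pins Docker to the cgroupfs driver so it matches the kubelet. The log does not show the file's contents, so the fields below are an assumption; a minimal sketch of generating such a file:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape of the daemon.json written for cgroupfs; only the
        // cgroup-driver setting is essential to this step.
        daemon := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(daemon, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // would be copied to /etc/docker/daemon.json on the guest
    }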
	I1028 11:33:41.541385    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:41.756572    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:33:44.368538    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6119366s)
	I1028 11:33:44.381312    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:33:44.419933    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:33:44.458482    3404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:33:44.677491    3404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:33:44.896287    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:45.114281    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:33:45.158661    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:33:45.196760    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:45.412812    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:33:45.536554    3404 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:33:45.548984    3404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:33:45.557701    3404 start.go:563] Will wait 60s for crictl version
	I1028 11:33:45.572716    3404 ssh_runner.go:195] Run: which crictl
	I1028 11:33:45.590540    3404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:33:45.655302    3404 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:33:45.666715    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:33:45.712150    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:33:45.748269    3404 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:33:45.751257    3404 out.go:177]   - env NO_PROXY=172.27.248.250
	I1028 11:33:45.754256    3404 out.go:177]   - env NO_PROXY=172.27.248.250,172.27.250.174
	I1028 11:33:45.756312    3404 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:33:45.764324    3404 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:33:45.764324    3404 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:33:45.775302    3404 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:33:45.782770    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
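The one-liner above makes the host.minikube.internal entry idempotent: it filters any existing line for that name out of /etc/hosts, appends a fresh "IP<TAB>name" record, and copies the result back with sudo. The same filter-then-append pattern, expressed over an in-memory string (a sketch, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any line ending in "<TAB>name" and appends a fresh
    // "ip<TAB>name" record, mirroring the grep -v / echo pipeline in the log.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // discard the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n172.27.240.9\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(hosts, "172.27.240.1", "host.minikube.internal"))
    }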
	I1028 11:33:45.806556    3404 mustload.go:65] Loading cluster: ha-201400
	I1028 11:33:45.807319    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:33:45.807868    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:33:48.100577    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:48.100635    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:48.100635    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:33:48.101239    3404 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400 for IP: 172.27.254.230
	I1028 11:33:48.101239    3404 certs.go:194] generating shared ca certs ...
	I1028 11:33:48.101239    3404 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:33:48.101847    3404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:33:48.101847    3404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:33:48.102543    3404 certs.go:256] generating profile certs ...
	I1028 11:33:48.103163    3404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key
	I1028 11:33:48.103393    3404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288
	I1028 11:33:48.103393    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.248.250 172.27.250.174 172.27.254.230 172.27.255.254]
	I1028 11:33:48.237615    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288 ...
	I1028 11:33:48.237615    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288: {Name:mkc46df1f9e0c76e7c9cb770a4a5c629941349cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:33:48.239446    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288 ...
	I1028 11:33:48.239446    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288: {Name:mk5457568e279a9532b182a66e070be2b509e809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:33:48.239893    3404 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt
	I1028 11:33:48.256003    3404 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key
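The apiserver serving certificate generated above has to carry every address a client might dial: the in-cluster service IP (10.96.0.1), localhost, 10.0.0.1, the three control-plane node IPs, and the kube-vip VIP (172.27.255.254). A compressed sketch of issuing a certificate with that IP SAN list via crypto/x509 (self-signed here purely to keep the example short; the real certificate is signed by the minikubeCA key, and the subject is an assumption):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // IP SANs copied from the log line above.
        var ips []net.IP
        for _, s := range []string{
            "10.96.0.1", "127.0.0.1", "10.0.0.1",
            "172.27.248.250", "172.27.250.174", "172.27.254.230", "172.27.255.254",
        } {
            ips = append(ips, net.ParseIP(s))
        }

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        // Self-signed for the sketch: the template doubles as the parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }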
	I1028 11:33:48.257480    3404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key
	I1028 11:33:48.257480    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:33:48.258326    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:33:48.258438    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:33:48.258438    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:33:48.259073    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:33:48.259101    3404 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:33:48.259101    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:33:48.259838    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:33:48.259838    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:33:48.259838    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:33:48.260602    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:33:48.260602    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:48.261292    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:33:48.261292    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:33:48.261292    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:33:50.582755    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:50.582755    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:50.582755    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:53.355928    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:33:53.355984    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:53.355984    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:33:53.450105    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:33:53.458269    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:33:53.492453    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:33:53.502207    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:33:53.537891    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:33:53.544824    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:33:53.579704    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:33:53.586100    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:33:53.619050    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:33:53.628633    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:33:53.665424    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:33:53.672731    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:33:53.694377    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:33:53.745142    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:33:53.796384    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:33:53.845752    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:33:53.895212    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:33:53.945245    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:33:53.994300    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:33:54.050528    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:33:54.106771    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:33:54.157919    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:33:54.207862    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:33:54.257434    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:33:54.290143    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:33:54.324751    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:33:54.359925    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:33:54.394481    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:33:54.430621    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:33:54.467028    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:33:54.515503    3404 ssh_runner.go:195] Run: openssl version
	I1028 11:33:54.537847    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:33:54.575680    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:54.585089    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:54.597334    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:54.619521    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:33:54.654899    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:33:54.688027    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:33:54.695905    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:33:54.709155    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:33:54.730855    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:33:54.764127    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:33:54.798539    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:33:54.805935    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:33:54.819515    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:33:54.840084    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:33:54.870713    3404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:33:54.877062    3404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:33:54.877362    3404 kubeadm.go:934] updating node {m03 172.27.254.230 8443 v1.31.2 docker true true} ...
	I1028 11:33:54.877607    3404 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-201400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.254.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:33:54.877607    3404 kube-vip.go:115] generating kube-vip config ...
	I1028 11:33:54.891233    3404 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:33:54.919058    3404 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:33:54.919250    3404 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.255.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:33:54.930646    3404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:33:54.948760    3404 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:33:54.959651    3404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:33:54.982077    3404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:33:54.982262    3404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:33:54.982371    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:33:54.982077    3404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:33:54.982650    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:33:54.996719    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:33:54.996719    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:33:54.998584    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:33:55.024045    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:33:55.024045    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:33:55.024045    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:33:55.024045    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:33:55.024045    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:33:55.041817    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:33:55.106434    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:33:55.106497    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:33:56.375413    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:33:56.396302    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1028 11:33:56.431359    3404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:33:56.466594    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:33:56.518086    3404 ssh_runner.go:195] Run: grep 172.27.255.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:33:56.526079    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:33:56.563493    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:56.776882    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:33:56.813597    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:33:56.813859    3404 start.go:317] joinCluster: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.27.254.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:33:56.814651    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:33:56.814721    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:33:59.109066    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:59.109066    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:59.110037    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:34:01.858253    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:34:01.859150    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:34:01.859303    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:34:02.079540    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2647488s)
	I1028 11:34:02.079955    3404 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.27.254.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:34:02.080193    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qo4296.00kz1cadrxef2kx2 --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m03 --control-plane --apiserver-advertise-address=172.27.254.230 --apiserver-bind-port=8443"
	I1028 11:34:49.236475    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qo4296.00kz1cadrxef2kx2 --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m03 --control-plane --apiserver-advertise-address=172.27.254.230 --apiserver-bind-port=8443": (47.1557509s)
	I1028 11:34:49.237329    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:34:50.027959    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-201400-m03 minikube.k8s.io/updated_at=2024_10_28T11_34_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-201400 minikube.k8s.io/primary=false
	I1028 11:34:50.251828    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-201400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:34:50.555605    3404 start.go:319] duration metric: took 53.7411393s to joinCluster
	I1028 11:34:50.555605    3404 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.27.254.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:34:50.557111    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:34:50.563179    3404 out.go:177] * Verifying Kubernetes components...
	I1028 11:34:50.578099    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:34:50.981726    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:34:51.035312    3404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:34:51.036250    3404 kapi.go:59] client config for ha-201400: &rest.Config{Host:"https://172.27.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:34:51.036462    3404 kubeadm.go:483] Overriding stale ClientConfig host https://172.27.255.254:8443 with https://172.27.248.250:8443
	I1028 11:34:51.037392    3404 node_ready.go:35] waiting up to 6m0s for node "ha-201400-m03" to be "Ready" ...
	I1028 11:34:51.037598    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:51.037698    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:51.037728    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:51.037728    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:51.055566    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:34:51.537885    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:51.537885    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:51.537885    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:51.537885    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:51.545795    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:34:52.038169    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:52.038169    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:52.038169    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:52.038169    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:52.044629    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:52.537641    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:52.537641    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:52.537641    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:52.537641    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:52.543524    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:53.037673    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:53.037673    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:53.037673    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:53.037673    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:53.050242    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:34:53.051320    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
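The GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03 requests repeating from here to the end of the excerpt are the "waiting up to 6m0s for node ... to be Ready" loop: the node object is fetched roughly every 500ms and its Ready condition inspected until it reports True or the deadline passes. An equivalent client-go sketch (kubeconfig path and node name taken from this run; everything else is illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-201400-m03", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }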
	I1028 11:34:53.538424    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:53.538424    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:53.538424    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:53.538424    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:53.543934    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:54.038641    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:54.038641    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:54.038641    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:54.038641    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:54.218651    3404 round_trippers.go:574] Response Status: 200 OK in 179 milliseconds
	I1028 11:34:54.538507    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:54.538507    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:54.538507    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:54.538507    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:54.544637    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:55.039565    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:55.039606    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:55.039606    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:55.039606    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:55.050208    3404 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:34:55.538548    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:55.538548    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:55.538548    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:55.538548    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:55.567243    3404 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1028 11:34:55.568924    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:34:56.038450    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:56.038808    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:56.038808    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:56.038808    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:56.047279    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:34:56.538275    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:56.538275    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:56.538275    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:56.538409    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:56.543956    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:57.037935    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:57.037935    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:57.037935    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:57.037935    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:57.047668    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:34:57.538351    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:57.538438    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:57.538438    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:57.538503    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:57.544549    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:58.037604    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:58.037604    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:58.037604    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:58.037604    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:58.043811    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:58.044727    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:34:58.537808    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:58.537808    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:58.537808    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:58.537808    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:58.543650    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:59.038120    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:59.038120    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:59.038120    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:59.038120    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:59.046121    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:34:59.538850    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:59.538850    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:59.538850    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:59.538850    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:59.546909    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:00.041250    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:00.041250    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:00.041337    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:00.041337    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:00.046024    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:00.046947    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:00.539302    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:00.539302    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:00.539404    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:00.539404    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:00.546092    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:01.039808    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:01.040034    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:01.040034    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:01.040034    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:01.046757    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:01.538816    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:01.538816    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:01.538816    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:01.538816    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:01.544407    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:02.038620    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:02.038620    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:02.038620    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:02.038620    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:02.045697    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:02.538146    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:02.538146    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:02.538146    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:02.538146    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:02.556496    3404 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1028 11:35:02.557356    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:03.038432    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:03.038432    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:03.038432    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:03.038432    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:03.043911    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:03.538734    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:03.538734    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:03.538734    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:03.538734    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:03.545555    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:04.039667    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:04.039667    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:04.039667    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:04.039667    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:04.057570    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:35:04.538289    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:04.538289    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:04.538379    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:04.538379    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:04.542398    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:05.038857    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:05.038857    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:05.038857    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:05.038857    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:05.045125    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:05.045793    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:05.538209    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:05.538209    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:05.538209    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:05.538209    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:05.546733    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:35:06.042808    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:06.042897    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:06.042897    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:06.042897    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:06.047547    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:06.538969    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:06.538969    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:06.538969    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:06.538969    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:06.544123    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:07.037842    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:07.037842    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:07.037842    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:07.037842    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:07.043618    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:07.538806    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:07.538906    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:07.538906    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:07.538906    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:07.543478    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:07.544943    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:08.038469    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:08.038469    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:08.038469    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:08.038469    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:08.045786    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:08.538420    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:08.538568    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:08.538568    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:08.538568    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:08.544099    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:09.038231    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:09.038231    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:09.038231    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:09.038231    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:09.044519    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:09.538331    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:09.538331    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:09.538331    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:09.538331    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:09.544804    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:09.545386    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:10.039186    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:10.039299    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:10.039299    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:10.039299    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:10.044793    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:10.538738    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:10.538886    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:10.538886    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:10.538886    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:10.544575    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.038670    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:11.038771    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.038771    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.038771    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.044854    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.045489    3404 node_ready.go:49] node "ha-201400-m03" has status "Ready":"True"
	I1028 11:35:11.045546    3404 node_ready.go:38] duration metric: took 20.0078432s for node "ha-201400-m03" to be "Ready" ...
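The block above is minikube's node_ready wait: the client polls GET /api/v1/nodes/ha-201400-m03 roughly every 500ms and inspects the node's Ready condition until it flips to "True" (about 20s in this run). A minimal standalone sketch of that loop, using only the Go standard library; the real wait goes through client-go with the kubeconfig's client certificate, so the hard-coded endpoint and the InsecureSkipVerify transport below are placeholder assumptions for illustration only.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node mirrors only the fields needed to read status.conditions from the API response.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches the node object and reports whether its Ready condition is "True".
func nodeReady(c *http.Client, url string) (bool, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Accept", "application/json")
	resp, err := c.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder endpoint taken from the log; a real client would authenticate with
	// the cluster's client certificate from the kubeconfig instead of skipping TLS checks.
	url := "https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03"
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   10 * time.Second,
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(client, url); err == nil && ready {
			fmt.Println(`node has status "Ready":"True"`)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
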
	I1028 11:35:11.045546    3404 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:35:11.045711    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:11.045711    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.045781    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.045781    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.060109    3404 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1028 11:35:11.071132    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.071132    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n2qnf
	I1028 11:35:11.071132    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.071132    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.071132    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.075846    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.076219    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.076219    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.076219    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.076219    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.082440    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.083777    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.083902    3404 pod_ready.go:82] duration metric: took 12.7699ms for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.083902    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.084065    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zt6f6
	I1028 11:35:11.084065    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.084065    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.084065    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.089992    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.091073    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.091189    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.091189    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.091189    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.095483    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.096840    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.096840    3404 pod_ready.go:82] duration metric: took 12.9377ms for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.096903    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.096979    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400
	I1028 11:35:11.096979    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.096979    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.096979    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.100391    3404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:35:11.101392    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.101392    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.101392    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.101392    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.105322    3404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:35:11.106320    3404 pod_ready.go:93] pod "etcd-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.106320    3404 pod_ready.go:82] duration metric: took 9.417ms for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.106320    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.106320    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m02
	I1028 11:35:11.106320    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.106320    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.106320    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.110525    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.111517    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:11.111517    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.111517    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.111517    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.115812    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.117574    3404 pod_ready.go:93] pod "etcd-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.117632    3404 pod_ready.go:82] duration metric: took 11.3109ms for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.117682    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.238978    3404 request.go:632] Waited for 121.2948ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m03
	I1028 11:35:11.238978    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m03
	I1028 11:35:11.238978    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.238978    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.238978    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.245636    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.438588    3404 request.go:632] Waited for 192.043ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:11.438588    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:11.438588    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.438588    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.438588    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.444290    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.444928    3404 pod_ready.go:93] pod "etcd-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.444987    3404 pod_ready.go:82] duration metric: took 327.3011ms for pod "etcd-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.444987    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.638590    3404 request.go:632] Waited for 193.5452ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:35:11.638590    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:35:11.638590    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.639038    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.639038    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.644857    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.838687    3404 request.go:632] Waited for 193.7661ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.838972    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.838972    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.838972    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.838972    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.845130    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.845688    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.845738    3404 pod_ready.go:82] duration metric: took 400.7464ms for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.845738    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.039295    3404 request.go:632] Waited for 193.5552ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:35:12.039295    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:35:12.039295    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.039295    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.039295    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.045989    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:12.238767    3404 request.go:632] Waited for 191.4366ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:12.238767    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:12.238767    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.238767    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.238767    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.244867    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:12.245532    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:12.245532    3404 pod_ready.go:82] duration metric: took 399.7897ms for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.245532    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.439152    3404 request.go:632] Waited for 193.513ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m03
	I1028 11:35:12.439152    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m03
	I1028 11:35:12.439152    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.439152    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.439152    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.445162    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:12.638476    3404 request.go:632] Waited for 192.2204ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:12.638476    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:12.638476    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.638476    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.638476    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.644225    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:12.644852    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:12.644922    3404 pod_ready.go:82] duration metric: took 399.3861ms for pod "kube-apiserver-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.644922    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.838668    3404 request.go:632] Waited for 193.6148ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:35:12.838668    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:35:12.838668    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.838668    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.838668    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.847614    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:35:13.039753    3404 request.go:632] Waited for 190.8875ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:13.040201    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:13.040257    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.040257    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.040257    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.046885    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.047796    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:13.047796    3404 pod_ready.go:82] duration metric: took 402.8686ms for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.047796    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.238781    3404 request.go:632] Waited for 190.8336ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:35:13.239204    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:35:13.239204    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.239204    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.239204    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.245683    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.438641    3404 request.go:632] Waited for 192.1838ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:13.439177    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:13.439251    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.439251    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.439251    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.445382    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.445916    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:13.446082    3404 pod_ready.go:82] duration metric: took 398.2166ms for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.446082    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.638606    3404 request.go:632] Waited for 192.5215ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m03
	I1028 11:35:13.639028    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m03
	I1028 11:35:13.639028    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.639028    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.639028    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.644958    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:13.839297    3404 request.go:632] Waited for 193.2724ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:13.839297    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:13.839297    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.839297    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.839297    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.845337    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.846070    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:13.846070    3404 pod_ready.go:82] duration metric: took 399.9833ms for pod "kube-controller-manager-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.846174    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.038769    3404 request.go:632] Waited for 192.5926ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:35:14.039075    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:35:14.039075    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.039075    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.039075    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.044408    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:14.238600    3404 request.go:632] Waited for 192.1111ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:14.238600    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:14.239171    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.239171    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.239171    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.245391    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:14.246032    3404 pod_ready.go:93] pod "kube-proxy-fg4c7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:14.246032    3404 pod_ready.go:82] duration metric: took 399.8534ms for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.246032    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.438891    3404 request.go:632] Waited for 192.8564ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:35:14.439159    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:35:14.439159    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.439159    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.439159    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.445836    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:14.639064    3404 request.go:632] Waited for 192.7643ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:14.639525    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:14.639632    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.639632    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.639698    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.647950    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:35:14.648744    3404 pod_ready.go:93] pod "kube-proxy-hkdzx" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:14.648744    3404 pod_ready.go:82] duration metric: took 402.7072ms for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.648744    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn4tk" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.838860    3404 request.go:632] Waited for 190.1144ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rn4tk
	I1028 11:35:14.839211    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rn4tk
	I1028 11:35:14.839211    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.839211    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.839211    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.844175    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:15.039410    3404 request.go:632] Waited for 194.1289ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:15.039809    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:15.039809    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.039809    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.039809    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.046859    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:15.047493    3404 pod_ready.go:93] pod "kube-proxy-rn4tk" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:15.047553    3404 pod_ready.go:82] duration metric: took 398.8048ms for pod "kube-proxy-rn4tk" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.047611    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.240202    3404 request.go:632] Waited for 192.5306ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:35:15.240202    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:35:15.240202    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.240810    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.240810    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.247384    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:15.438690    3404 request.go:632] Waited for 190.3211ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:15.439105    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:15.439105    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.439105    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.439105    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.448552    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:35:15.450002    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:15.450061    3404 pod_ready.go:82] duration metric: took 402.4451ms for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.450119    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.638593    3404 request.go:632] Waited for 188.4119ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:35:15.638593    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:35:15.638593    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.638593    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.638593    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.642738    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:15.839292    3404 request.go:632] Waited for 194.3681ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:15.839686    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:15.839686    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.839686    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.839686    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.845499    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:15.846530    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:15.846530    3404 pod_ready.go:82] duration metric: took 396.4069ms for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.846530    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:16.039364    3404 request.go:632] Waited for 192.8318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m03
	I1028 11:35:16.039364    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m03
	I1028 11:35:16.039364    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.039364    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.039364    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.045609    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:16.238515    3404 request.go:632] Waited for 191.5595ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:16.238515    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:16.238515    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.238515    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.238515    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.245885    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:16.246886    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:16.246945    3404 pod_ready.go:82] duration metric: took 400.3511ms for pod "kube-scheduler-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:16.246945    3404 pod_ready.go:39] duration metric: took 5.2013408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
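The recurring "Waited for ...ms due to client-side throttling, not priority and fairness" lines in the pod waits above come from the Kubernetes client's own rate limiter (the QPS/Burst settings on its REST config), not from server-side API Priority and Fairness. A rough sketch of the same behaviour using golang.org/x/time/rate; the 5 QPS / burst 10 figures are illustrative assumptions, not minikube's actual configuration.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Allow roughly 5 requests per second with a burst of 10; beyond that, Wait blocks,
	// which is what produces the "Waited for ...ms due to client-side throttling" delays.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	for i := 0; i < 20; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("limiter:", err)
			return
		}
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
		}
		// ... issue the API request here ...
	}
}
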
	I1028 11:35:16.247004    3404 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:35:16.257998    3404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:35:16.290742    3404 api_server.go:72] duration metric: took 25.7342916s to wait for apiserver process to appear ...
	I1028 11:35:16.290804    3404 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:35:16.290804    3404 api_server.go:253] Checking apiserver healthz at https://172.27.248.250:8443/healthz ...
	I1028 11:35:16.301461    3404 api_server.go:279] https://172.27.248.250:8443/healthz returned 200:
	ok
	I1028 11:35:16.301461    3404 round_trippers.go:463] GET https://172.27.248.250:8443/version
	I1028 11:35:16.301461    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.301461    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.301461    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.303925    3404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:35:16.303925    3404 api_server.go:141] control plane version: v1.31.2
	I1028 11:35:16.303925    3404 api_server.go:131] duration metric: took 13.1207ms to wait for apiserver health ...
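After the pod waits, the apiserver health check above is two plain HTTPS probes: GET /healthz, which should return 200 with the literal body "ok", followed by GET /version to read the control-plane gitVersion (v1.31.2 here). A minimal sketch of both probes with the standard library; as before, the address and the unauthenticated, certificate-skipping client are placeholder assumptions.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Placeholder API server address from the log; real code authenticates via kubeconfig.
	base := "https://172.27.248.250:8443"
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}

	// 1) /healthz should return HTTP 200 with the body "ok".
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("%s/healthz returned %d: %s\n", base, resp.StatusCode, body)

	// 2) /version reports the control-plane version string.
	resp, err = client.Get(base + "/version")
	if err != nil {
		fmt.Println("version:", err)
		return
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Println("control plane version:", v.GitVersion)
}
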
	I1028 11:35:16.303925    3404 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:35:16.439254    3404 request.go:632] Waited for 135.3279ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.439254    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.439254    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.439254    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.439254    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.451407    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:35:16.464550    3404 system_pods.go:59] 24 kube-system pods found
	I1028 11:35:16.464550    3404 system_pods.go:61] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "etcd-ha-201400-m03" [b9057ad6-62aa-4b43-845a-bbf864d71066] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kindnet-5xvlb" [3561e5ab-664f-4377-ab6a-287cd5f68d85] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-apiserver-ha-201400-m03" [c4b4e094-2ef6-44b6-90a1-9ec79e7f83f1] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-controller-manager-ha-201400-m03" [544cf071-e35d-42c9-bc3e-bcc74426e10a] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-proxy-rn4tk" [b39a95c7-89e2-4c00-8506-3de2d9c161be] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-scheduler-ha-201400-m03" [e9723214-ff30-45ae-8572-80c03b363255] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-vip-ha-201400-m03" [3f7e58ed-ce82-4278-989c-2aab7e02b15f] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:35:16.464550    3404 system_pods.go:74] duration metric: took 160.6231ms to wait for pod list to return data ...
	I1028 11:35:16.465237    3404 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:35:16.639107    3404 request.go:632] Waited for 173.7499ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:35:16.639107    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:35:16.639107    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.639107    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.639314    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.644479    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:16.644479    3404 default_sa.go:45] found service account: "default"
	I1028 11:35:16.644479    3404 default_sa.go:55] duration metric: took 179.2401ms for default service account to be created ...
	I1028 11:35:16.644479    3404 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:35:16.839028    3404 request.go:632] Waited for 194.5475ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.839642    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.839642    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.839642    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.839642    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.849856    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:35:16.859972    3404 system_pods.go:86] 24 kube-system pods found
	I1028 11:35:16.859972    3404 system_pods.go:89] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "etcd-ha-201400-m03" [b9057ad6-62aa-4b43-845a-bbf864d71066] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kindnet-5xvlb" [3561e5ab-664f-4377-ab6a-287cd5f68d85] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-apiserver-ha-201400-m03" [c4b4e094-2ef6-44b6-90a1-9ec79e7f83f1] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-controller-manager-ha-201400-m03" [544cf071-e35d-42c9-bc3e-bcc74426e10a] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-proxy-rn4tk" [b39a95c7-89e2-4c00-8506-3de2d9c161be] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-scheduler-ha-201400-m03" [e9723214-ff30-45ae-8572-80c03b363255] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "kube-vip-ha-201400-m03" [3f7e58ed-ce82-4278-989c-2aab7e02b15f] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:35:16.860535    3404 system_pods.go:126] duration metric: took 216.0542ms to wait for k8s-apps to be running ...
	I1028 11:35:16.860535    3404 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:35:16.871318    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:35:16.901278    3404 system_svc.go:56] duration metric: took 40.742ms WaitForService to wait for kubelet
	I1028 11:35:16.901278    3404 kubeadm.go:582] duration metric: took 26.3448209s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:35:16.901278    3404 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:35:17.042067    3404 request.go:632] Waited for 140.6084ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes
	I1028 11:35:17.042067    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes
	I1028 11:35:17.042067    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:17.042067    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:17.042067    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:17.049884    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:17.051378    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:35:17.051378    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:35:17.051378    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:35:17.051378    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:35:17.051378    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:35:17.051378    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:35:17.051378    3404 node_conditions.go:105] duration metric: took 149.9193ms to run NodePressure ...
	I1028 11:35:17.051378    3404 start.go:241] waiting for startup goroutines ...
	I1028 11:35:17.051554    3404 start.go:255] writing updated cluster config ...
	I1028 11:35:17.064594    3404 ssh_runner.go:195] Run: rm -f paused
	I1028 11:35:17.222554    3404 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:35:17.227166    3404 out.go:177] * Done! kubectl is now configured to use "ha-201400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 28 11:27:08 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:27:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/49c04a2d5e21812b2c7e82476fb91f9b76c877eeca25c4e66382aa63b56e502b/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:27:08 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:27:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb2aa4b2b548eef445230cf2c3a200766113aeb266ecc8cf69faaa49088039ce/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:27:08 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:27:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dbff880c5c17f28b4eec93c33d392f3dba70e66dc941a97f0942d10a0cb1e19/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.530587825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.530670626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.530690326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.531018130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.536958698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.537189900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.537236501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.537965909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.574531628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.574698829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.574775530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.575774742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145206068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145336870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145355870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145716676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:35:58 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:35:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1b312bd0d3e3e3de763a1951c21f9ab365e129d50fb50ed7e88db6c55a29fffb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 28 11:35:59 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:35:59Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.079607066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.079705667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.079764768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.080064371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	209e04121e9c7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   1b312bd0d3e3e       busybox-7dff88458-gp9fd
	ce3d7e9066412       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   eb2aa4b2b548e       coredns-7c65d6cfc9-n2qnf
	64d978358caa1       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   49c04a2d5e218       coredns-7c65d6cfc9-zt6f6
	b639363d7d172       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   2dbff880c5c17       storage-provisioner
	7f47c99a1a2a9       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              10 minutes ago       Running             kindnet-cni               0                   363f5ef872145       kindnet-d99h6
	9f51b5bae8691       505d571f5fd56                                                                                         10 minutes ago       Running             kube-proxy                0                   e90af1cf3bbde       kube-proxy-fg4c7
	ab049c2140bcb       ghcr.io/kube-vip/kube-vip@sha256:b5049ac9e9e750783c32c69b88c48f7b0efb6b23f94f656471d5f82222fe1b72     10 minutes ago       Running             kube-vip                  0                   4f8837814079a       kube-vip-ha-201400
	afe94cc393c22       847c7bc1a5418                                                                                         10 minutes ago       Running             kube-scheduler            0                   fe63d450fb737       kube-scheduler-ha-201400
	fa49f1d4e69ac       9499c9960544e                                                                                         10 minutes ago       Running             kube-apiserver            0                   0ec9b0145aa57       kube-apiserver-ha-201400
	c2bfb2f1e6510       2e96e5913fc06                                                                                         10 minutes ago       Running             etcd                      0                   11a6643cdc967       etcd-ha-201400
	d70ee194fe7fd       0486b6c53a1b5                                                                                         10 minutes ago       Running             kube-controller-manager   0                   f8e1bb9eda406       kube-controller-manager-ha-201400
	
	
	==> coredns [64d978358caa] <==
	[INFO] 10.244.1.2:42364 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000074701s
	[INFO] 10.244.0.4:39059 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219603s
	[INFO] 10.244.3.2:41051 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223203s
	[INFO] 10.244.3.2:52465 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.060785981s
	[INFO] 10.244.3.2:47473 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179802s
	[INFO] 10.244.3.2:37784 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01243484s
	[INFO] 10.244.3.2:60128 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131002s
	[INFO] 10.244.1.2:58405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000678907s
	[INFO] 10.244.1.2:41270 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000208202s
	[INFO] 10.244.0.4:55035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137402s
	[INFO] 10.244.0.4:41846 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191502s
	[INFO] 10.244.0.4:57771 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000345904s
	[INFO] 10.244.3.2:52220 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208603s
	[INFO] 10.244.1.2:36760 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116701s
	[INFO] 10.244.1.2:42206 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173101s
	[INFO] 10.244.1.2:38287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147901s
	[INFO] 10.244.0.4:58812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302003s
	[INFO] 10.244.0.4:37201 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241703s
	[INFO] 10.244.0.4:46594 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000281503s
	[INFO] 10.244.3.2:38659 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000945511s
	[INFO] 10.244.3.2:35862 - 5 "PTR IN 1.240.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186602s
	[INFO] 10.244.1.2:52364 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000277003s
	[INFO] 10.244.0.4:43333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175902s
	[INFO] 10.244.0.4:55448 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102801s
	[INFO] 10.244.0.4:35819 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076401s
	
	
	==> coredns [ce3d7e906641] <==
	[INFO] 10.244.3.2:48251 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000271503s
	[INFO] 10.244.3.2:38266 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000516706s
	[INFO] 10.244.3.2:60132 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223902s
	[INFO] 10.244.1.2:46194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363404s
	[INFO] 10.244.1.2:41842 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015410673s
	[INFO] 10.244.1.2:47891 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	[INFO] 10.244.1.2:33575 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000335703s
	[INFO] 10.244.1.2:40207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146002s
	[INFO] 10.244.1.2:57094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071201s
	[INFO] 10.244.0.4:41269 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333604s
	[INFO] 10.244.0.4:32903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070601s
	[INFO] 10.244.0.4:42397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201703s
	[INFO] 10.244.0.4:37058 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.031470952s
	[INFO] 10.244.0.4:47788 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094801s
	[INFO] 10.244.3.2:49058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126501s
	[INFO] 10.244.3.2:39030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000388204s
	[INFO] 10.244.3.2:56997 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172602s
	[INFO] 10.244.1.2:45147 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236303s
	[INFO] 10.244.0.4:53698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000471306s
	[INFO] 10.244.3.2:45832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226402s
	[INFO] 10.244.3.2:44628 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000317503s
	[INFO] 10.244.1.2:35552 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207202s
	[INFO] 10.244.1.2:35517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000276603s
	[INFO] 10.244.1.2:32969 - 5 "PTR IN 1.240.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000203702s
	[INFO] 10.244.0.4:40599 - 5 "PTR IN 1.240.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000402805s
	
	
	==> describe nodes <==
	Name:               ha-201400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_26_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:26:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:37:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:36:13 +0000   Mon, 28 Oct 2024 11:26:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:36:13 +0000   Mon, 28 Oct 2024 11:26:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:36:13 +0000   Mon, 28 Oct 2024 11:26:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:36:13 +0000   Mon, 28 Oct 2024 11:27:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.248.250
	  Hostname:    ha-201400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d9b21fc4b6e43f192a470b2b32c065c
	  System UUID:                4d027834-1578-3349-910e-6bd5fd5d19d3
	  Boot ID:                    938bfcc6-b024-401d-adf6-d844cbceb838
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gp9fd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-7c65d6cfc9-n2qnf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7c65d6cfc9-zt6f6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-201400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-d99h6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-201400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-201400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fg4c7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-201400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-201400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-201400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node ha-201400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node ha-201400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m    node-controller  Node ha-201400 event: Registered Node ha-201400 in Controller
	  Normal  NodeReady                9m59s  kubelet          Node ha-201400 status is now: NodeReady
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-201400 event: Registered Node ha-201400 in Controller
	  Normal  RegisteredNode           2m10s  node-controller  Node ha-201400 event: Registered Node ha-201400 in Controller
	
	
	Name:               ha-201400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_30_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:30:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:37:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:36:09 +0000   Mon, 28 Oct 2024 11:30:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:36:09 +0000   Mon, 28 Oct 2024 11:30:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:36:09 +0000   Mon, 28 Oct 2024 11:30:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:36:09 +0000   Mon, 28 Oct 2024 11:30:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.250.174
	  Hostname:    ha-201400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 67e52766d1694419a06b7897d409cd07
	  System UUID:                2f914c11-708e-3647-87e1-cddb2789e410
	  Boot ID:                    bcae8fc4-a9dd-40fa-9483-4c4c8a12d2e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cvthb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-201400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-cwkwx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-apiserver-ha-201400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-201400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-hkdzx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-scheduler-ha-201400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-201400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node ha-201400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node ha-201400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node ha-201400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     6m33s                  cidrAllocator    Node ha-201400-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           6m32s                  node-controller  Node ha-201400-m02 event: Registered Node ha-201400-m02 in Controller
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-201400-m02 event: Registered Node ha-201400-m02 in Controller
	  Normal  RegisteredNode           2m10s                  node-controller  Node ha-201400-m02 event: Registered Node ha-201400-m02 in Controller
	
	
	Name:               ha-201400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_34_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:34:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:37:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:36:14 +0000   Mon, 28 Oct 2024 11:34:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:36:14 +0000   Mon, 28 Oct 2024 11:34:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:36:14 +0000   Mon, 28 Oct 2024 11:34:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:36:14 +0000   Mon, 28 Oct 2024 11:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.254.230
	  Hostname:    ha-201400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cbf09a9edff42f8a4b57c2e4fd514f4
	  System UUID:                d9636759-aa61-cc45-ad61-dd9dce51708f
	  Boot ID:                    4b48913b-bf4e-45b2-91b5-e33dd68d8730
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b84wl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 etcd-ha-201400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m20s
	  kube-system                 kindnet-5xvlb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m24s
	  kube-system                 kube-apiserver-ha-201400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-ha-201400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-rn4tk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-ha-201400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-vip-ha-201400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     2m24s                  cidrAllocator    Node ha-201400-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node ha-201400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node ha-201400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m24s)  kubelet          Node ha-201400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-201400-m03 event: Registered Node ha-201400-m03 in Controller
	  Normal  RegisteredNode           2m22s                  node-controller  Node ha-201400-m03 event: Registered Node ha-201400-m03 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-201400-m03 event: Registered Node ha-201400-m03 in Controller
	
	
	==> dmesg <==
	[  +1.968020] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.248029] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 11:25] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.189155] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Oct28 11:26] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.140755] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.581345] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.216742] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.231896] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.928180] systemd-fstab-generator[1282]: Ignoring "noauto" option for root device
	[  +0.223278] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.194653] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.283776] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[ +12.200504] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.111987] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.173282] systemd-fstab-generator[1678]: Ignoring "noauto" option for root device
	[  +6.810485] systemd-fstab-generator[1829]: Ignoring "noauto" option for root device
	[  +0.113552] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.887892] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.541446] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +5.371714] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.727467] kauditd_printk_skb: 29 callbacks suppressed
	[Oct28 11:30] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c2bfb2f1e651] <==
	{"level":"info","ts":"2024-10-28T11:34:46.091967Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c1e4c284ed7b9261","remote-peer-id":"e0baf44d3bca421a"}
	{"level":"info","ts":"2024-10-28T11:34:46.205394Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c1e4c284ed7b9261","to":"e0baf44d3bca421a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-10-28T11:34:46.205462Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c1e4c284ed7b9261","remote-peer-id":"e0baf44d3bca421a"}
	{"level":"info","ts":"2024-10-28T11:34:46.585694Z","caller":"traceutil/trace.go:171","msg":"trace[2112783194] linearizableReadLoop","detail":"{readStateIndex:1696; appliedIndex:1696; }","duration":"106.206204ms","start":"2024-10-28T11:34:46.479464Z","end":"2024-10-28T11:34:46.585670Z","steps":["trace[2112783194] 'read index received'  (duration: 106.198604ms)","trace[2112783194] 'applied index is now lower than readState.Index'  (duration: 6.5µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:34:46.586355Z","caller":"traceutil/trace.go:171","msg":"trace[122615777] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"106.513605ms","start":"2024-10-28T11:34:46.479828Z","end":"2024-10-28T11:34:46.586341Z","steps":["trace[122615777] 'process raft request'  (duration: 106.253704ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:34:46.586715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.233709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-201400-m03\" ","response":"range_response_count:1 size:4368"}
	{"level":"info","ts":"2024-10-28T11:34:46.587220Z","caller":"traceutil/trace.go:171","msg":"trace[310615738] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-201400-m03; range_end:; response_count:1; response_revision:1514; }","duration":"107.747711ms","start":"2024-10-28T11:34:46.479462Z","end":"2024-10-28T11:34:46.587209Z","steps":["trace[310615738] 'agreement among raft nodes before linearized reading'  (duration: 107.122808ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:34:46.825196Z","caller":"traceutil/trace.go:171","msg":"trace[773396409] transaction","detail":"{read_only:false; response_revision:1516; number_of_response:1; }","duration":"228.845085ms","start":"2024-10-28T11:34:46.596327Z","end":"2024-10-28T11:34:46.825173Z","steps":["trace[773396409] 'process raft request'  (duration: 228.770285ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:34:46.825321Z","caller":"traceutil/trace.go:171","msg":"trace[1928527893] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"341.348719ms","start":"2024-10-28T11:34:46.483958Z","end":"2024-10-28T11:34:46.825307Z","steps":["trace[1928527893] 'process raft request'  (duration: 282.63024ms)","trace[1928527893] 'compare'  (duration: 58.320877ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:34:46.826345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:34:46.483946Z","time spent":"342.277923ms","remote":"127.0.0.1:44666","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":419,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:1507 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"warn","ts":"2024-10-28T11:34:46.963089Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e0baf44d3bca421a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-10-28T11:34:47.962227Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e0baf44d3bca421a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-10-28T11:34:48.999749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c1e4c284ed7b9261 switched to configuration voters=(13971505820185891425 14851934453769997657 16193524022716809754)"}
	{"level":"info","ts":"2024-10-28T11:34:49.000097Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"2b520c82fdd8381","local-member-id":"c1e4c284ed7b9261"}
	{"level":"info","ts":"2024-10-28T11:34:49.000308Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"c1e4c284ed7b9261","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e0baf44d3bca421a"}
	{"level":"info","ts":"2024-10-28T11:34:54.229168Z","caller":"traceutil/trace.go:171","msg":"trace[118440275] linearizableReadLoop","detail":"{readStateIndex:1765; appliedIndex:1765; }","duration":"174.395727ms","start":"2024-10-28T11:34:54.054754Z","end":"2024-10-28T11:34:54.229150Z","steps":["trace[118440275] 'read index received'  (duration: 174.390827ms)","trace[118440275] 'applied index is now lower than readState.Index'  (duration: 3.6µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:34:54.229693Z","caller":"traceutil/trace.go:171","msg":"trace[1556403992] transaction","detail":"{read_only:false; response_revision:1574; number_of_response:1; }","duration":"184.870376ms","start":"2024-10-28T11:34:54.044810Z","end":"2024-10-28T11:34:54.229681Z","steps":["trace[1556403992] 'process raft request'  (duration: 184.455974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:34:54.229826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.979229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-201400-m03\" ","response":"range_response_count:1 size:4444"}
	{"level":"info","ts":"2024-10-28T11:34:54.230210Z","caller":"traceutil/trace.go:171","msg":"trace[1668254837] range","detail":"{range_begin:/registry/minions/ha-201400-m03; range_end:; response_count:1; response_revision:1573; }","duration":"175.248031ms","start":"2024-10-28T11:34:54.054749Z","end":"2024-10-28T11:34:54.229997Z","steps":["trace[1668254837] 'agreement among raft nodes before linearized reading'  (duration: 174.808029ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:34:55.453150Z","caller":"traceutil/trace.go:171","msg":"trace[1879516832] transaction","detail":"{read_only:false; response_revision:1579; number_of_response:1; }","duration":"204.262368ms","start":"2024-10-28T11:34:55.248828Z","end":"2024-10-28T11:34:55.453090Z","steps":["trace[1879516832] 'process raft request'  (duration: 204.102568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:34:56.304405Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.123626ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:34:56.304772Z","caller":"traceutil/trace.go:171","msg":"trace[1997109901] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1586; }","duration":"132.502127ms","start":"2024-10-28T11:34:56.172255Z","end":"2024-10-28T11:34:56.304758Z","steps":["trace[1997109901] 'range keys from in-memory index tree'  (duration: 132.111926ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:36:33.251454Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1043}
	{"level":"info","ts":"2024-10-28T11:36:33.360354Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1043,"took":"105.02805ms","hash":2927673269,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2097152,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-10-28T11:36:33.360548Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2927673269,"revision":1043,"compact-revision":-1}
	
	
	==> kernel <==
	 11:37:06 up 12 min,  0 users,  load average: 0.68, 0.81, 0.45
	Linux ha-201400 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7f47c99a1a2a] <==
	I1028 11:36:24.714208       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:36:34.710381       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:36:34.710421       1 main.go:300] handling current node
	I1028 11:36:34.710440       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:36:34.710448       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:36:34.711372       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:36:34.711483       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:36:44.704363       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:36:44.704691       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:36:44.705388       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:36:44.705481       1 main.go:300] handling current node
	I1028 11:36:44.705500       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:36:44.705508       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:36:54.704554       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:36:54.704754       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:36:54.705359       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:36:54.705396       1 main.go:300] handling current node
	I1028 11:36:54.705413       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:36:54.705420       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:37:04.709422       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:37:04.709463       1 main.go:300] handling current node
	I1028 11:37:04.709482       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:37:04.709490       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:37:04.710096       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:37:04.710115       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fa49f1d4e69a] <==
	I1028 11:26:39.820016       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:26:39.858551       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:26:39.913705       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:26:43.735026       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:26:44.335558       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:34:43.221693       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.5µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1028 11:34:43.222136       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:34:43.301320       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1028 11:34:43.317282       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:34:43.352052       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="111.66733ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-201400-m03.18029aaec6f3a61c" result=null
	E1028 11:36:04.232222       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58141: use of closed network connection
	E1028 11:36:04.810503       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58143: use of closed network connection
	E1028 11:36:06.638438       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58145: use of closed network connection
	E1028 11:36:07.284460       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58147: use of closed network connection
	E1028 11:36:07.886130       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58149: use of closed network connection
	E1028 11:36:08.476464       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58151: use of closed network connection
	E1028 11:36:09.041023       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58153: use of closed network connection
	E1028 11:36:09.619643       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58155: use of closed network connection
	E1028 11:36:10.190242       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58157: use of closed network connection
	E1028 11:36:11.240344       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58160: use of closed network connection
	E1028 11:36:21.803256       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58162: use of closed network connection
	E1028 11:36:22.384039       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58165: use of closed network connection
	E1028 11:36:32.943672       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58167: use of closed network connection
	E1028 11:36:33.500830       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58170: use of closed network connection
	E1028 11:36:44.047431       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58172: use of closed network connection
	
	
	==> kube-controller-manager [d70ee194fe7f] <==
	I1028 11:34:55.882845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:34:55.966156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:35:10.673264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:35:10.709304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:35:10.926354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:35:13.155541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:35:56.952270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="166.838747ms"
	I1028 11:35:57.114115       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="160.864948ms"
	I1028 11:35:57.427435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="313.248457ms"
	I1028 11:35:57.605810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="177.848314ms"
	I1028 11:35:57.675549       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.603202ms"
	I1028 11:35:57.678186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.801µs"
	I1028 11:35:57.796526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.872941ms"
	I1028 11:35:57.797676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="302.305µs"
	I1028 11:35:57.993629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.321881ms"
	I1028 11:35:57.996173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="415.206µs"
	I1028 11:36:00.399245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.852293ms"
	I1028 11:36:00.399370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.9µs"
	I1028 11:36:00.458917       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.083261ms"
	I1028 11:36:00.459002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.4µs"
	I1028 11:36:01.365770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.30492ms"
	I1028 11:36:01.368348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.101µs"
	I1028 11:36:09.929539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m02"
	I1028 11:36:13.576765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400"
	I1028 11:36:14.334396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	
	
	==> kube-proxy [9f51b5bae869] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:26:45.681723       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:26:45.724318       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.27.248.250"]
	E1028 11:26:45.724406       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:26:45.800088       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:26:45.800155       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:26:45.800204       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:26:45.804211       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:26:45.804955       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:26:45.805041       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:26:45.809285       1 config.go:199] "Starting service config controller"
	I1028 11:26:45.809516       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:26:45.809739       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:26:45.810087       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:26:45.812617       1 config.go:328] "Starting node config controller"
	I1028 11:26:45.812756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:26:45.910473       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:26:45.910541       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:26:45.913305       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [afe94cc393c2] <==
	E1028 11:26:36.794820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:36.845101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:26:36.845167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:36.872928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:26:36.873157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:36.965316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:26:36.965385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.026414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:26:37.026730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.083538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:26:37.083671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.117377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:26:37.117836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.148593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 11:26:37.149503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.208715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:26:37.209200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.343368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:26:37.344275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.393068       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 11:26:37.393338       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1028 11:26:39.903930       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:35:56.901211       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 946938ec-9c81-4b74-88bb-1468a578aa88(default/busybox-7dff88458-cvthb) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-cvthb"
	E1028 11:35:56.908277       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 946938ec-9c81-4b74-88bb-1468a578aa88(default/busybox-7dff88458-cvthb) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-cvthb"
	I1028 11:35:56.909400       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-cvthb" node="ha-201400-m02"
	
	
	==> kubelet <==
	Oct 28 11:32:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:33:39 ha-201400 kubelet[2332]: E1028 11:33:39.988545    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:33:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:33:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:33:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:33:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:34:39 ha-201400 kubelet[2332]: E1028 11:34:39.985481    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:34:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:34:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:34:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:34:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:35:39 ha-201400 kubelet[2332]: E1028 11:35:39.984111    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:35:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:35:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:35:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:35:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:35:56 ha-201400 kubelet[2332]: I1028 11:35:56.987771    2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zt6f6" podStartSLOduration=552.987075744 podStartE2EDuration="9m12.987075744s" podCreationTimestamp="2024-10-28 11:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 11:27:09.428361358 +0000 UTC m=+29.716751477" watchObservedRunningTime="2024-10-28 11:35:56.987075744 +0000 UTC m=+557.275465863"
	Oct 28 11:35:57 ha-201400 kubelet[2332]: I1028 11:35:57.146663    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vr7d\" (UniqueName: \"kubernetes.io/projected/fcaeea45-7e03-4f0d-b720-af44340cc9c9-kube-api-access-4vr7d\") pod \"busybox-7dff88458-gp9fd\" (UID: \"fcaeea45-7e03-4f0d-b720-af44340cc9c9\") " pod="default/busybox-7dff88458-gp9fd"
	Oct 28 11:35:58 ha-201400 kubelet[2332]: I1028 11:35:58.361902    2332 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b312bd0d3e3e3de763a1951c21f9ab365e129d50fb50ed7e88db6c55a29fffb"
	Oct 28 11:36:00 ha-201400 kubelet[2332]: I1028 11:36:00.434617    2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-gp9fd" podStartSLOduration=3.026619291 podStartE2EDuration="4.43453907s" podCreationTimestamp="2024-10-28 11:35:56 +0000 UTC" firstStartedPulling="2024-10-28 11:35:58.428702344 +0000 UTC m=+558.717092363" lastFinishedPulling="2024-10-28 11:35:59.836622123 +0000 UTC m=+560.125012142" observedRunningTime="2024-10-28 11:36:00.433218555 +0000 UTC m=+560.721608674" watchObservedRunningTime="2024-10-28 11:36:00.43453907 +0000 UTC m=+560.722929189"
	Oct 28 11:36:39 ha-201400 kubelet[2332]: E1028 11:36:39.984587    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:36:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:36:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:36:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:36:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-201400 -n ha-201400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-201400 -n ha-201400: (13.0998644s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-201400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (71.33s)
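Editor's note (not part of the captured output): in the kubelet and kube-proxy logs above, the recurring "Could not set up iptables canary" error and kube-proxy's "No iptables support for family" ipFamily="IPv6" message both point at the guest kernel lacking the ip6tables nat table (ip6table_nat), which should be benign for this IPv4 single-stack cluster. A minimal diagnostic sketch, assuming the same host and profile and reusing the document's own ssh form (commands hypothetical, not executed in this run):

	# check whether the IPv6 nat table / module is present inside the ha-201400 VM
	out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo ip6tables -t nat -L"
	out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"

If the first command reports that the table does not exist and the module is absent, the canary errors are noise rather than a networking fault.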

                                                
                                    
TestMultiControlPlane/serial/CopyFile (670.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 status --output json -v=7 --alsologtostderr: (50.8825059s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400:/home/docker/cp-test.txt: (10.2190115s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt": (10.0453561s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400.txt: (10.0797193s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt": (10.1572786s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt ha-201400-m02:/home/docker/cp-test_ha-201400_ha-201400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt ha-201400-m02:/home/docker/cp-test_ha-201400_ha-201400-m02.txt: (17.5040989s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt"
E1028 11:44:45.539463    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt": (10.0070804s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test_ha-201400_ha-201400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test_ha-201400_ha-201400-m02.txt": (10.0084123s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400_ha-201400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400_ha-201400-m03.txt: (17.6479388s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt": (10.1406392s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400_ha-201400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400_ha-201400-m03.txt": (10.1010144s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt ha-201400-m04:/home/docker/cp-test_ha-201400_ha-201400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400:/home/docker/cp-test.txt ha-201400-m04:/home/docker/cp-test_ha-201400_ha-201400-m04.txt: (17.5637984s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test.txt": (9.9699689s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test_ha-201400_ha-201400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test_ha-201400_ha-201400-m04.txt": (9.9334477s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400-m02:/home/docker/cp-test.txt: (10.0039594s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt": (10.1028779s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m02.txt
E1028 11:46:39.695453    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m02.txt: (9.9777658s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt": (9.9470556s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt ha-201400:/home/docker/cp-test_ha-201400-m02_ha-201400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt ha-201400:/home/docker/cp-test_ha-201400-m02_ha-201400.txt: (17.4739146s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt": (10.0306817s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test_ha-201400-m02_ha-201400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test_ha-201400-m02_ha-201400.txt": (10.048486s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400-m02_ha-201400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400-m02_ha-201400-m03.txt: (17.2943488s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt": (9.9171863s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400-m02_ha-201400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400-m02_ha-201400-m03.txt": (9.9035671s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt ha-201400-m04:/home/docker/cp-test_ha-201400-m02_ha-201400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m02:/home/docker/cp-test.txt ha-201400-m04:/home/docker/cp-test_ha-201400-m02_ha-201400-m04.txt: (17.3306348s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test.txt": (9.9856198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test_ha-201400-m02_ha-201400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test_ha-201400-m02_ha-201400-m04.txt": (9.9592643s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400-m03:/home/docker/cp-test.txt: (9.972881s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt": (10.018945s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m03.txt: (9.9575622s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt": (10.0599216s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt ha-201400:/home/docker/cp-test_ha-201400-m03_ha-201400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt ha-201400:/home/docker/cp-test_ha-201400-m03_ha-201400.txt: (17.2866153s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt"
E1028 11:49:45.542188    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt": (9.8804029s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test_ha-201400-m03_ha-201400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test_ha-201400-m03_ha-201400.txt": (9.8376582s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt ha-201400-m02:/home/docker/cp-test_ha-201400-m03_ha-201400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt ha-201400-m02:/home/docker/cp-test_ha-201400-m03_ha-201400-m02.txt: (17.2314227s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt": (10.0566549s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test_ha-201400-m03_ha-201400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test_ha-201400-m03_ha-201400-m02.txt": (10.0440796s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt ha-201400-m04:/home/docker/cp-test_ha-201400-m03_ha-201400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt ha-201400-m04:/home/docker/cp-test_ha-201400-m03_ha-201400-m04.txt: (17.4014908s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test.txt": (9.8770367s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test_ha-201400-m03_ha-201400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test_ha-201400-m03_ha-201400-m04.txt": (9.8738941s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400-m04:/home/docker/cp-test.txt
E1028 11:51:22.775072    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp testdata\cp-test.txt ha-201400-m04:/home/docker/cp-test.txt: (9.983761s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt": (9.9582824s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m04.txt
E1028 11:51:39.697143    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m04.txt: (9.8825324s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt": (9.8815626s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400:/home/docker/cp-test_ha-201400-m04_ha-201400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400:/home/docker/cp-test_ha-201400-m04_ha-201400.txt: (17.2267902s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt": (9.9890814s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400.txt": (10.078017s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m02:/home/docker/cp-test_ha-201400-m04_ha-201400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m02:/home/docker/cp-test_ha-201400-m04_ha-201400-m02.txt: (17.3835155s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt": (9.9899283s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m02 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400-m02.txt": (9.9248546s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt: exit status 1 (11.893305s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt
helpers_test.go:561: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt" : exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (102.7µs)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 \"sudo cat /home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt\"" : context deadline exceeded
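Editor's note (not part of the captured output): the failure pattern above is a global-deadline cascade. Each preceding cp/ssh round trip took roughly 10-17s, the cp to ha-201400-m03 then exited non-zero after 11.9s, and every later command was cancelled immediately ("context deadline exceeded" after 102.7µs and 0s), meaning the test's overall timeout had already been consumed rather than each later command failing on its own. A minimal sketch for separating per-command latency from the global deadline, assuming PowerShell on the Jenkins host (file name and commands hypothetical, not executed in this run):

	# time a single node-to-node copy and its verification outside the test harness
	Measure-Command { out/minikube-windows-amd64.exe -p ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt ha-201400-m03:/home/docker/cp-test_manual.txt }
	Measure-Command { out/minikube-windows-amd64.exe -p ha-201400 ssh -n ha-201400-m03 "sudo cat /home/docker/cp-test_manual.txt" }

Consistently slow individual hops would point at Hyper-V/SSH latency; a single stuck hop would point at the m03/m04 nodes themselves.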
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-201400 -n ha-201400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-201400 -n ha-201400: (12.7997245s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 logs -n 25: (9.3963512s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-201400 ssh -n ha-201400-m04 sudo cat                                                                                  | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:48 UTC | 28 Oct 24 11:48 UTC |
	|         | /home/docker/cp-test_ha-201400-m02_ha-201400-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-201400 cp testdata\cp-test.txt                                                                                        | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:48 UTC | 28 Oct 24 11:48 UTC |
	|         | ha-201400-m03:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:48 UTC | 28 Oct 24 11:49 UTC |
	|         | ha-201400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:49 UTC | 28 Oct 24 11:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:49 UTC | 28 Oct 24 11:49 UTC |
	|         | ha-201400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:49 UTC | 28 Oct 24 11:49 UTC |
	|         | ha-201400:/home/docker/cp-test_ha-201400-m03_ha-201400.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:49 UTC | 28 Oct 24 11:49 UTC |
	|         | ha-201400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n ha-201400 sudo cat                                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:49 UTC | 28 Oct 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-201400-m03_ha-201400.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:50 UTC | 28 Oct 24 11:50 UTC |
	|         | ha-201400-m02:/home/docker/cp-test_ha-201400-m03_ha-201400-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:50 UTC | 28 Oct 24 11:50 UTC |
	|         | ha-201400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n ha-201400-m02 sudo cat                                                                                  | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:50 UTC | 28 Oct 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-201400-m03_ha-201400-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m03:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:50 UTC | 28 Oct 24 11:50 UTC |
	|         | ha-201400-m04:/home/docker/cp-test_ha-201400-m03_ha-201400-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:50 UTC | 28 Oct 24 11:51 UTC |
	|         | ha-201400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n ha-201400-m04 sudo cat                                                                                  | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:51 UTC |
	|         | /home/docker/cp-test_ha-201400-m03_ha-201400-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-201400 cp testdata\cp-test.txt                                                                                        | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:51 UTC |
	|         | ha-201400-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:51 UTC |
	|         | ha-201400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:51 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile187888037\001\cp-test_ha-201400-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:51 UTC |
	|         | ha-201400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:52 UTC |
	|         | ha-201400:/home/docker/cp-test_ha-201400-m04_ha-201400.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:52 UTC | 28 Oct 24 11:52 UTC |
	|         | ha-201400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n ha-201400 sudo cat                                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:52 UTC | 28 Oct 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-201400-m04_ha-201400.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:52 UTC | 28 Oct 24 11:52 UTC |
	|         | ha-201400-m02:/home/docker/cp-test_ha-201400-m04_ha-201400-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n                                                                                                         | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:52 UTC | 28 Oct 24 11:53 UTC |
	|         | ha-201400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-201400 ssh -n ha-201400-m02 sudo cat                                                                                  | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:53 UTC | 28 Oct 24 11:53 UTC |
	|         | /home/docker/cp-test_ha-201400-m04_ha-201400-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-201400 cp ha-201400-m04:/home/docker/cp-test.txt                                                                      | ha-201400 | minikube6\jenkins | v1.34.0 | 28 Oct 24 11:53 UTC |                     |
	|         | ha-201400-m03:/home/docker/cp-test_ha-201400-m04_ha-201400-m03.txt                                                       |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:23:24
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:23:23.945177    3404 out.go:345] Setting OutFile to fd 1420 ...
	I1028 11:23:24.025125    3404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:23:24.025125    3404 out.go:358] Setting ErrFile to fd 1632...
	I1028 11:23:24.025125    3404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:23:24.053744    3404 out.go:352] Setting JSON to false
	I1028 11:23:24.056741    3404 start.go:129] hostinfo: {"hostname":"minikube6","uptime":162429,"bootTime":1729952174,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 11:23:24.056741    3404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:23:24.065808    3404 out.go:177] * [ha-201400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 11:23:24.072065    3404 notify.go:220] Checking for updates...
	I1028 11:23:24.074260    3404 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:23:24.079186    3404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:23:24.082394    3404 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 11:23:24.084903    3404 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:23:24.087428    3404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:23:24.091365    3404 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:23:29.788175    3404 out.go:177] * Using the hyperv driver based on user configuration
	I1028 11:23:29.792145    3404 start.go:297] selected driver: hyperv
	I1028 11:23:29.792183    3404 start.go:901] validating driver "hyperv" against <nil>
	I1028 11:23:29.792264    3404 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:23:29.844191    3404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:23:29.846138    3404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:23:29.846138    3404 cni.go:84] Creating CNI manager for ""
	I1028 11:23:29.846138    3404 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:23:29.846138    3404 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:23:29.846138    3404 start.go:340] cluster config:
	{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:23:29.846138    3404 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:23:29.850377    3404 out.go:177] * Starting "ha-201400" primary control-plane node in "ha-201400" cluster
	I1028 11:23:29.853474    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:23:29.853474    3404 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 11:23:29.854170    3404 cache.go:56] Caching tarball of preloaded images
	I1028 11:23:29.854314    3404 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:23:29.854314    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:23:29.854967    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:23:29.855532    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json: {Name:mkec662da8c9b8a5bcca6963febe40e58918464d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:23:29.855760    3404 start.go:360] acquireMachinesLock for ha-201400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:23:29.856765    3404 start.go:364] duration metric: took 1.0052ms to acquireMachinesLock for "ha-201400"
	I1028 11:23:29.856765    3404 start.go:93] Provisioning new machine with config: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:23:29.856765    3404 start.go:125] createHost starting for "" (driver="hyperv")
	I1028 11:23:29.858999    3404 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:23:29.860001    3404 start.go:159] libmachine.API.Create for "ha-201400" (driver="hyperv")
	I1028 11:23:29.860001    3404 client.go:168] LocalClient.Create starting
	I1028 11:23:29.860001    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 11:23:29.860769    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:23:29.860769    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:23:29.860988    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 11:23:29.861141    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:23:29.861141    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:23:29.861141    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 11:23:32.050169    3404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 11:23:32.050169    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:32.050288    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 11:23:33.890385    3404 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 11:23:33.890385    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:33.890667    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:23:35.487087    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:23:35.487087    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:35.487087    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:23:39.390943    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:23:39.390943    3404 main.go:141] libmachine: [stderr =====>] : 
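The exchange above is the pattern the hyperv driver repeats for every Hyper-V query in this log: a one-shot powershell.exe -NoProfile -NonInteractive invocation of a single script, with stdout and stderr captured and echoed as the [stdout =====>] / [stderr =====>] pairs. Below is a minimal Go sketch of that pattern; the helper name psCmd is hypothetical and the quoting details may differ from minikube's actual driver code.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    const powerShell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

    // psCmd runs one PowerShell command non-interactively and returns trimmed
    // stdout and stderr, mirroring the [executing ==>] / [stdout =====>] /
    // [stderr =====>] lines in the log above.
    func psCmd(script string) (stdout, stderr string, err error) {
    	cmd := exec.Command(powerShell, "-NoProfile", "-NonInteractive", script)
    	var out, errBuf bytes.Buffer
    	cmd.Stdout = &out
    	cmd.Stderr = &errBuf
    	err = cmd.Run()
    	return strings.TrimSpace(out.String()), strings.TrimSpace(errBuf.String()), err
    }

    func main() {
    	// Same check the driver issues first: is the Hyper-V module available?
    	out, _, err := psCmd(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
    	if err != nil {
    		fmt.Println("powershell failed:", err)
    		return
    	}
    	fmt.Println("module:", out) // expected: Hyper-V
    }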
	I1028 11:23:39.394704    3404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:23:39.909693    3404 main.go:141] libmachine: Creating SSH key...
	I1028 11:23:40.215866    3404 main.go:141] libmachine: Creating VM...
	I1028 11:23:40.215866    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:23:43.289477    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:23:43.290589    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:43.290589    3404 main.go:141] libmachine: Using switch "Default Switch"
	I1028 11:23:43.290589    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:23:45.149187    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:23:45.149187    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:45.149187    3404 main.go:141] libmachine: Creating VHD
	I1028 11:23:45.150149    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 11:23:48.979903    3404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F71B427F-95EF-46C6-BB3D-C741D8705557
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 11:23:48.980994    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:48.981047    3404 main.go:141] libmachine: Writing magic tar header
	I1028 11:23:48.981047    3404 main.go:141] libmachine: Writing SSH key tar header
	I1028 11:23:48.991865    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 11:23:52.232922    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:23:52.232979    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:52.232979    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\disk.vhd' -SizeBytes 20000MB
	I1028 11:23:54.858270    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:23:54.858270    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:54.858545    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-201400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 11:23:58.670169    3404 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-201400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 11:23:58.670553    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:23:58.670633    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-201400 -DynamicMemoryEnabled $false
	I1028 11:24:01.012735    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:01.012735    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:01.013644    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-201400 -Count 2
	I1028 11:24:03.328467    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:03.329221    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:03.329339    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-201400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\boot2docker.iso'
	I1028 11:24:06.018951    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:06.018951    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:06.018951    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-201400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\disk.vhd'
	I1028 11:24:08.783783    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:08.783783    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:08.783783    3404 main.go:141] libmachine: Starting VM...
	I1028 11:24:08.783783    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-201400
	I1028 11:24:11.994076    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:11.994076    3404 main.go:141] libmachine: [stderr =====>] : 
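Taken together, the steps from New-VHD through Start-VM are a fixed cmdlet sequence: create a small fixed VHD that carries the SSH key as a tar stream, convert it to a dynamic disk, grow it to the requested size, create the VM on the chosen switch, pin its memory and CPU count, attach the ISO and the disk, and start it. The sketch below restates that sequence with the values from this run; it reuses a psCmd-style helper like the one sketched earlier and is illustrative, not minikube's actual implementation.

    import "fmt"

    // createVM replays the cmdlet sequence above. ps is a psCmd-style helper;
    // in this run dir is the ha-201400 machine directory, the switch is
    // "Default Switch", memMB=2200, cpus=2 and diskMB=20000.
    func createVM(ps func(string) (string, string, error),
    	dir, name, switchName string, memMB, cpus, diskMB int) error {
    	fixed := dir + `\fixed.vhd`
    	disk := dir + `\disk.vhd`
    	iso := dir + `\boot2docker.iso`
    	steps := []string{
    		// A 10MB fixed VHD; the driver writes a "magic" tar header plus the
    		// SSH key into it before converting it into the real boot disk.
    		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
    		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
    		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, diskMB),
    		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName '%s' -MemoryStartupBytes %dMB`, name, dir, switchName, memMB),
    		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
    		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count %d`, name, cpus),
    		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s'`, name, iso),
    		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s'`, name, disk),
    		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
    	}
    	for _, step := range steps {
    		if _, stderr, err := ps(step); err != nil {
    			return fmt.Errorf("%s: %v (%s)", step, err, stderr)
    		}
    	}
    	return nil
    }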
	I1028 11:24:11.994076    3404 main.go:141] libmachine: Waiting for host to start...
	I1028 11:24:11.994076    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:14.383093    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:14.383134    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:14.383218    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:17.020640    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:17.020640    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:18.022370    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:20.388698    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:20.388698    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:20.389384    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:23.040590    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:23.040590    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:24.041561    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:26.388718    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:26.388718    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:26.389800    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:29.030374    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:29.030591    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:30.031010    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:32.334668    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:32.335595    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:32.335595    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:35.022501    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:24:35.022501    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:36.023084    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:38.342495    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:38.342495    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:38.342495    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:41.068455    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:41.068455    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:41.068455    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:43.306699    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:43.307581    3404 main.go:141] libmachine: [stderr =====>] : 
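The repeated Get-VM calls between 11:24:11 and 11:24:41 are the "Waiting for host to start..." loop: confirm the VM state is Running, then ask its first network adapter for its first IP address, and retry while the answer is still empty (about 30 seconds elapsed here before 172.27.248.250 appeared). A sketch of that loop follows; the polling interval and timeout are assumptions, since neither is visible in the log.

    import (
    	"fmt"
    	"time"
    )

    // waitForIP polls Hyper-V until the named VM is Running and reports an IP,
    // mirroring the wait loop above. ps is a psCmd-style helper.
    func waitForIP(ps func(string) (string, string, error), name string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, _, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name))
    		if err != nil {
    			return "", err
    		}
    		if state == "Running" {
    			ip, _, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name))
    			if err != nil {
    				return "", err
    			}
    			if ip != "" {
    				return ip, nil // e.g. 172.27.248.250 in this run
    			}
    		}
    		time.Sleep(time.Second) // assumed interval; the log shows roughly 1s between rounds
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", name)
    }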
	I1028 11:24:43.307687    3404 machine.go:93] provisionDockerMachine start ...
	I1028 11:24:43.307687    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:45.586564    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:45.586641    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:45.586641    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:48.300822    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:48.300892    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:48.306670    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:24:48.319711    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:24:48.319711    3404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:24:48.448312    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 11:24:48.448312    3404 buildroot.go:166] provisioning hostname "ha-201400"
	I1028 11:24:48.448312    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:50.695910    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:50.696477    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:50.696638    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:53.363606    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:53.363707    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:53.370925    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:24:53.371631    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:24:53.371631    3404 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-201400 && echo "ha-201400" | sudo tee /etc/hostname
	I1028 11:24:53.522931    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-201400
	
	I1028 11:24:53.522931    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:24:55.735125    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:24:55.735125    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:55.735327    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:24:58.399484    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:24:58.399484    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:24:58.406211    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:24:58.406784    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:24:58.406910    3404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-201400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-201400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-201400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:24:58.542079    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
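The hostname provisioning above runs three commands over SSH: read the current hostname, set the new one via /etc/hostname, and patch /etc/hosts so the new name resolves locally; note that the /etc/hosts script only rewrites an existing 127.0.1.1 entry and appends one otherwise. A small sketch that rebuilds those commands for an arbitrary machine name (illustrative only; the authoritative commands are the ones quoted in the log):

    import "fmt"

    // hostnameCommands returns the provisioning commands executed over SSH
    // above, parameterised by the machine name ("ha-201400" in this run).
    func hostnameCommands(host string) []string {
    	return []string{
    		fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, host),
    		fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, host),
    	}
    }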
	I1028 11:24:58.542079    3404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:24:58.542079    3404 buildroot.go:174] setting up certificates
	I1028 11:24:58.542079    3404 provision.go:84] configureAuth start
	I1028 11:24:58.542079    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:00.773961    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:00.773961    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:00.773961    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:03.411510    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:03.411510    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:03.411982    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:05.620663    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:05.620663    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:05.620748    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:08.267402    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:08.268418    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:08.268523    3404 provision.go:143] copyHostCerts
	I1028 11:25:08.268701    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:25:08.269074    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:25:08.269074    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:25:08.269555    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:25:08.270975    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:25:08.271524    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:25:08.271524    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:25:08.271880    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:25:08.272805    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:25:08.273095    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:25:08.273171    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:25:08.273427    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:25:08.274724    3404 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-201400 san=[127.0.0.1 172.27.248.250 ha-201400 localhost minikube]
	I1028 11:25:08.408133    3404 provision.go:177] copyRemoteCerts
	I1028 11:25:08.421118    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:25:08.421118    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:10.618156    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:10.618156    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:10.618272    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:13.271182    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:13.271182    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:13.271923    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:25:13.375426    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9541188s)
	I1028 11:25:13.375426    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:25:13.376020    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:25:13.440790    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:25:13.440966    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:25:13.491081    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:25:13.491440    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 11:25:13.550743    3404 provision.go:87] duration metric: took 15.0084107s to configureAuth
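configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the VM's address, the machine name, localhost and minikube, and then copies three files into /etc/docker so the dockerd configured a few steps later can require TLS on tcp://0.0.0.0:2376. A sketch of that copy step; scpFile is a hypothetical stand-in for the scp calls in the log, and the local paths come from this job's .minikube directory.

    import "path/filepath"

    // copyDockerTLS pushes the CA certificate and the freshly generated server
    // key pair into /etc/docker, as the copyRemoteCerts step above does.
    func copyDockerTLS(scpFile func(local, remote string) error, certDir, machineDir string) error {
    	files := map[string]string{
    		filepath.Join(certDir, "ca.pem"):            "/etc/docker/ca.pem",
    		filepath.Join(machineDir, "server.pem"):     "/etc/docker/server.pem",
    		filepath.Join(machineDir, "server-key.pem"): "/etc/docker/server-key.pem",
    	}
    	for local, remote := range files {
    		if err := scpFile(local, remote); err != nil {
    			return err
    		}
    	}
    	return nil
    }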
	I1028 11:25:13.550743    3404 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:25:13.551430    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:25:13.551430    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:15.759031    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:15.759031    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:15.759127    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:18.394001    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:18.394001    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:18.400532    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:18.401259    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:18.401259    3404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:25:18.521445    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:25:18.521445    3404 buildroot.go:70] root file system type: tmpfs
	I1028 11:25:18.521445    3404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:25:18.522064    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:20.781489    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:20.781489    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:20.781835    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:23.418009    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:23.418009    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:23.424631    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:23.424703    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:23.425287    3404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:25:23.581281    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:25:23.581824    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:25.784576    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:25.784680    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:25.784747    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:28.473715    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:28.474572    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:28.480729    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:28.481287    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:28.481287    3404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:25:30.779863    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 11:25:30.779942    3404 machine.go:96] duration metric: took 47.4717185s to provisionDockerMachine
	I1028 11:25:30.780006    3404 client.go:171] duration metric: took 2m0.9185746s to LocalClient.Create
	I1028 11:25:30.780006    3404 start.go:167] duration metric: took 2m0.9186379s to libmachine.API.Create "ha-201400"
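The docker.service unit above is not written into place directly. It is tee'd to /lib/systemd/system/docker.service.new, and only when diff -u reports a difference (or, as here, the unit does not exist yet) is it moved over the real path and followed by daemon-reload, enable and restart. That keeps re-provisioning idempotent: an unchanged unit never triggers a docker restart. A sketch of how that one-liner can be assembled for any service (the helper name is hypothetical):

    import "fmt"

    // updateUnitCommand builds the "swap in the new unit only if it changed"
    // one-liner seen above; with service="docker" it reproduces the command
    // from this log.
    func updateUnitCommand(service string) string {
    	path := "/lib/systemd/system/" + service + ".service"
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
    			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
    		path, service)
    }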
	I1028 11:25:30.780082    3404 start.go:293] postStartSetup for "ha-201400" (driver="hyperv")
	I1028 11:25:30.780082    3404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:25:30.793085    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:25:30.793085    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:33.024287    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:33.024287    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:33.024287    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:35.723875    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:35.723875    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:35.725281    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:25:35.832422    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0392802s)
	I1028 11:25:35.844100    3404 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:25:35.851529    3404 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:25:35.851654    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:25:35.851949    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:25:35.853091    3404 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:25:35.853091    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:25:35.865291    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:25:35.885900    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:25:35.935921    3404 start.go:296] duration metric: took 5.1557809s for postStartSetup
	I1028 11:25:35.939827    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:38.203571    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:38.204124    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:38.204206    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:40.859263    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:40.859263    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:40.859263    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:25:40.862661    3404 start.go:128] duration metric: took 2m11.0042909s to createHost
	I1028 11:25:40.862819    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:43.112867    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:43.113113    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:43.113113    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:45.850010    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:45.850010    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:45.855986    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:45.856863    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:45.856941    3404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:25:45.982599    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114745.994965636
	
	I1028 11:25:45.982599    3404 fix.go:216] guest clock: 1730114745.994965636
	I1028 11:25:45.982599    3404 fix.go:229] Guest: 2024-10-28 11:25:45.994965636 +0000 UTC Remote: 2024-10-28 11:25:40.8626619 +0000 UTC m=+137.016616101 (delta=5.132303736s)
	I1028 11:25:45.982599    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:48.277991    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:48.277991    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:48.278927    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:50.980005    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:50.980477    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:50.986024    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:25:50.986664    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.248.250 22 <nil> <nil>}
	I1028 11:25:50.986664    3404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730114745
	I1028 11:25:51.136121    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:25:45 UTC 2024
	
	I1028 11:25:51.136121    3404 fix.go:236] clock set: Mon Oct 28 11:25:45 UTC 2024
	 (err=<nil>)
	I1028 11:25:51.136121    3404 start.go:83] releasing machines lock for "ha-201400", held for 2m21.2777595s
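createHost ends with a clock check: the guest clock is read with date +%s.%N, compared with the host's record of when the command was issued, and because the drift here was about 5.1 seconds the guest is reset with sudo date -s @<seconds>. A sketch of that logic; the tolerance is an assumption, since the real threshold is not visible in this log, and runSSH is a hypothetical stand-in for minikube's SSH runner.

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // syncGuestClock resets the guest clock when it drifts too far from the
    // host, as the fix.go lines above do for a ~5.1s delta.
    func syncGuestClock(runSSH func(string) (string, error), tolerance time.Duration) error {
    	out, err := runSSH("date +%s.%N")
    	if err != nil {
    		return err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return err
    	}
    	drift := time.Since(time.Unix(int64(secs), 0))
    	if drift < 0 {
    		drift = -drift
    	}
    	if drift <= tolerance {
    		return nil
    	}
    	_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    	return err
    }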
	I1028 11:25:51.136121    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:53.372266    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:53.372266    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:53.372266    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:56.020790    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:25:56.020790    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:56.026784    3404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:25:56.026942    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:56.036676    3404 ssh_runner.go:195] Run: cat /version.json
	I1028 11:25:56.037227    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:25:58.308577    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:58.308577    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:58.308577    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:25:58.320156    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:25:58.320156    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:25:58.320156    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:26:01.087448    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:01.087448    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:01.087986    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:01.148274    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:01.148424    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:01.148424    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:01.179724    3404 ssh_runner.go:235] Completed: cat /version.json: (5.14299s)
	I1028 11:26:01.192035    3404 ssh_runner.go:195] Run: systemctl --version
	I1028 11:26:01.198312    3404 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1713908s)
	W1028 11:26:01.198312    3404 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 11:26:01.215361    3404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:26:01.226050    3404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:26:01.237961    3404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:26:01.271296    3404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:26:01.271355    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:26:01.271406    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1028 11:26:01.302739    3404 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:26:01.302739    3404 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:26:01.326760    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:26:01.366594    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:26:01.387384    3404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:26:01.398657    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:26:01.433340    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:26:01.469680    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:26:01.504099    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:26:01.541281    3404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:26:01.575337    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:26:01.607447    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:26:01.640962    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:26:01.673727    3404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:26:01.693418    3404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:26:01.705052    3404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:26:01.739561    3404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:26:01.768235    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:01.996681    3404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:26:02.028859    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:26:02.040484    3404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:26:02.079820    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:26:02.117997    3404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:26:02.160672    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:26:02.200954    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:26:02.238318    3404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:26:02.300890    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:26:02.325769    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:26:02.376682    3404 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:26:02.393651    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:26:02.412029    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:26:02.455275    3404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:26:02.696531    3404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:26:02.891957    3404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:26:02.891957    3404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:26:02.934953    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:03.164029    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:26:05.768119    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6040159s)
	I1028 11:26:05.780334    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:26:05.820842    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:26:05.858384    3404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:26:06.074653    3404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:26:06.281331    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:06.479459    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:26:06.523960    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:26:06.561086    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:06.772706    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
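Because this profile runs the docker container runtime, the stretch above disables the alternatives and wires the CRI socket to cri-dockerd: containerd and crio are stopped, /etc/crictl.yaml is pointed at unix:///var/run/cri-dockerd.sock, docker is unmasked and restarted with a cgroupfs daemon.json, and the cri-docker socket and service are unmasked, enabled and restarted. Condensed into a single ordered list (illustrative; the authoritative commands are the ones in the log):

    // dockerRuntimeSetup lists, in order, the commands run above once docker
    // has been selected as the container runtime. The crictl.yaml written
    // between the stop and unmask steps contains a single line:
    //   runtime-endpoint: unix:///var/run/cri-dockerd.sock
    var dockerRuntimeSetup = []string{
    	"sudo systemctl stop -f containerd",
    	"sudo systemctl stop -f crio",
    	"sudo systemctl unmask docker.service",
    	"sudo systemctl enable docker.socket",
    	"sudo systemctl daemon-reload",
    	"sudo systemctl restart docker",
    	"sudo systemctl unmask cri-docker.socket",
    	"sudo systemctl enable cri-docker.socket",
    	"sudo systemctl daemon-reload",
    	"sudo systemctl restart cri-docker.socket",
    	"sudo systemctl daemon-reload",
    	"sudo systemctl restart cri-docker.service",
    }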
	I1028 11:26:06.895054    3404 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:26:06.907221    3404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:26:06.916450    3404 start.go:563] Will wait 60s for crictl version
	I1028 11:26:06.928147    3404 ssh_runner.go:195] Run: which crictl
	I1028 11:26:06.946215    3404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:26:07.003964    3404 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:26:07.013272    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:26:07.067503    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:26:07.108854    3404 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:26:07.108854    3404 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:26:07.113922    3404 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:26:07.113973    3404 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:26:07.113973    3404 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:26:07.113973    3404 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:26:07.117380    3404 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:26:07.117380    3404 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:26:07.131924    3404 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:26:07.138514    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:26:07.171895    3404 kubeadm.go:883] updating cluster {Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:26:07.171895    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:26:07.181258    3404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:26:07.205286    3404 docker.go:689] Got preloaded images: 
	I1028 11:26:07.205338    3404 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.2 wasn't preloaded
	I1028 11:26:07.217508    3404 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 11:26:07.248905    3404 ssh_runner.go:195] Run: which lz4
	I1028 11:26:07.257257    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:26:07.272547    3404 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:26:07.279996    3404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:26:07.280167    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (343199686 bytes)
	I1028 11:26:09.168462    3404 docker.go:653] duration metric: took 1.9111828s to copy over tarball
	I1028 11:26:09.181877    3404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:26:17.265329    3404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.0833599s)
	I1028 11:26:17.265329    3404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:26:17.348168    3404 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 11:26:17.367439    3404 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1028 11:26:18.759399    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:18.973666    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:26:21.628547    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6548509s)
	I1028 11:26:21.639809    3404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:26:21.672280    3404 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 11:26:21.672443    3404 cache_images.go:84] Images are preloaded, skipping loading
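Image preloading is a check followed by a bulk copy: docker images inside the VM did not yet contain registry.k8s.io/kube-apiserver:v1.31.2, so the roughly 343 MB preloaded-images tarball was scp'd to /preloaded.tar.lz4 and unpacked straight into /var with tar's lz4 filter (about 8 seconds here), after which the eight images above are present without a single registry pull. A sketch of that decision; runSSH and scpFile are hypothetical stand-ins for minikube's ssh_runner.

    import "strings"

    // ensurePreloadedImages unpacks the preload tarball into /var only when
    // the expected apiserver image is missing, mirroring the check above.
    func ensurePreloadedImages(runSSH func(string) (string, error),
    	scpFile func(local, remote string) error, localTarball, k8sVersion string) error {
    	out, err := runSSH("docker images --format {{.Repository}}:{{.Tag}}")
    	if err != nil {
    		return err
    	}
    	if strings.Contains(out, "registry.k8s.io/kube-apiserver:"+k8sVersion) {
    		return nil // already preloaded, skip loading
    	}
    	if err := scpFile(localTarball, "/preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	// Same extraction command as in the log above.
    	if _, err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	_, err = runSSH("sudo rm -f /preloaded.tar.lz4")
    	return err
    }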
	I1028 11:26:21.672443    3404 kubeadm.go:934] updating node { 172.27.248.250 8443 v1.31.2 docker true true} ...
	I1028 11:26:21.672702    3404 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-201400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.248.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
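
The two log entries above show the kubelet systemd drop-in minikube renders for this node: ExecStart pinned to the v1.31.2 kubelet binary with --hostname-override and --node-ip filled in from the node config. As a rough illustration only (not minikube's actual generator; the field names KubeletPath, NodeName and NodeIP are invented for the sketch), rendering such a drop-in with Go's text/template looks like this:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is a simplified stand-in for the drop-in minikube writes to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the layout mirrors the log above.
const kubeletDropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the ha-201400 primary control-plane node in the log.
	data := struct {
		KubeletPath, NodeName, NodeIP string
	}{"/var/lib/minikube/binaries/v1.31.2/kubelet", "ha-201400", "172.27.248.250"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
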
	I1028 11:26:21.682254    3404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 11:26:21.755750    3404 cni.go:84] Creating CNI manager for ""
	I1028 11:26:21.755832    3404 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:26:21.755872    3404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:26:21.755938    3404 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.248.250 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-201400 NodeName:ha-201400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.248.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.248.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:26:21.756136    3404 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.248.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-201400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.27.248.250"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.248.250"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
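
The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that ends up at /var/tmp/minikube/kubeadm.yaml on the node. A quick, non-authoritative way to sanity-check that such a file parses and to list its documents, sketched with gopkg.in/yaml.v3 (the local file name is an assumption for the sketch):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Illustrative path; on the node the rendered file lands at /var/tmp/minikube/kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.v3 decodes multi-document streams one document per Decode call.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("invalid YAML document: %v", err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
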
	
	I1028 11:26:21.756283    3404 kube-vip.go:115] generating kube-vip config ...
	I1028 11:26:21.767851    3404 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:26:21.795259    3404 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:26:21.795387    3404 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.255.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
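
The static pod manifest above runs kube-vip with cp_enable and lb_enable, so the HA VIP 172.27.255.254 should answer API traffic on 8443 once the control plane is up. A hypothetical smoke test (not part of this suite) that the VIP is serving TLS, using only the Go standard library; TLS verification is skipped only because the sketch has no access to the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification purely for the sketch; use the cluster CA in real checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// VIP and port come from the kube-vip config in the log above.
	resp, err := client.Get("https://172.27.255.254:8443/healthz")
	if err != nil {
		log.Fatalf("VIP not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered with status:", resp.Status)
}
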
	I1028 11:26:21.806835    3404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:26:21.826970    3404 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:26:21.838537    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:26:21.858302    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1028 11:26:21.891379    3404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:26:21.921780    3404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1028 11:26:21.956895    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:26:22.002521    3404 ssh_runner.go:195] Run: grep 172.27.255.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:26:22.009560    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
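
The bash one-liner above keeps /etc/hosts idempotent: drop any stale control-plane.minikube.internal line, append the current VIP mapping, then copy the temp file back over /etc/hosts. The same logic expressed in plain Go (path and entry taken from the log; the temp-file-plus-sudo-cp dance is omitted and error handling is minimal, so treat it as a sketch):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		entry     = "172.27.255.254\tcontrol-plane.minikube.internal"
	)
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for the control-plane alias, whatever IP it pointed at.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		log.Fatal(err)
	}
}
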
	I1028 11:26:22.045830    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:22.261348    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:26:22.298295    3404 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400 for IP: 172.27.248.250
	I1028 11:26:22.298388    3404 certs.go:194] generating shared ca certs ...
	I1028 11:26:22.298447    3404 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.298571    3404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:26:22.298571    3404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:26:22.298571    3404 certs.go:256] generating profile certs ...
	I1028 11:26:22.298571    3404 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key
	I1028 11:26:22.298571    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.crt with IP's: []
	I1028 11:26:22.361747    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.crt ...
	I1028 11:26:22.361747    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.crt: {Name:mkc73e42285e6173fedba85ce6073b39b49eaa4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.363617    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key ...
	I1028 11:26:22.363617    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key: {Name:mk352d8d9096b4da61558569d3583a91f9774340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.364243    3404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5
	I1028 11:26:22.365239    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.248.250 172.27.255.254]
	I1028 11:26:22.598792    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5 ...
	I1028 11:26:22.598792    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5: {Name:mkd2b27f659177c16b390d5504556630de468537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.600209    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5 ...
	I1028 11:26:22.600209    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5: {Name:mk9724f5d53c33b68a93a081cb10ad12cf0d1375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.601257    3404 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.6de85fe5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt
	I1028 11:26:22.614851    3404 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.6de85fe5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key
	I1028 11:26:22.616844    3404 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key
	I1028 11:26:22.617433    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt with IP's: []
	I1028 11:26:22.912167    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt ...
	I1028 11:26:22.913287    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt: {Name:mk5f04bf38ef925a1e509f5e1f07ddbecad69152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.914873    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key ...
	I1028 11:26:22.914873    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key: {Name:mkdba9b4bd7ac2bc479ed6817470eed2c30be6cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:22.915274    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:26:22.916390    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:26:22.916552    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:26:22.916661    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:26:22.916964    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:26:22.917191    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:26:22.917380    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:26:22.927650    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:26:22.928828    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:26:22.929512    3404 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:26:22.929512    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:26:22.929863    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:26:22.930365    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:26:22.930572    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:26:22.930924    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:26:22.930924    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:22.930924    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:26:22.930924    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:26:22.933328    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:26:22.987072    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:26:23.035501    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:26:23.091383    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:26:23.143544    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:26:23.195907    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:26:23.245549    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:26:23.292533    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:26:23.336317    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:26:23.377946    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:26:23.424369    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:26:23.475936    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:26:23.523304    3404 ssh_runner.go:195] Run: openssl version
	I1028 11:26:23.546779    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:26:23.583281    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:23.591819    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:23.601651    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:23.625357    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:26:23.656703    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:26:23.690110    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:26:23.697841    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:26:23.707741    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:26:23.729094    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:26:23.761341    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:26:23.792722    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:26:23.799806    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:26:23.810899    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:26:23.832216    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
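
After the certificates are copied into the VM, the log hashes each CA bundle (openssl x509 -hash -noout) and symlinks it into /etc/ssl/certs so OpenSSL-based clients can find it. A small local check that a generated profile certificate and key actually belong together, and when the certificate expires, using only crypto/tls and crypto/x509 (the file names follow the ha-201400 profile layout above and are assumptions for the sketch):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	// Paths mirror the profile files scp'd to /var/lib/minikube/certs above; adjust as needed.
	pair, err := tls.LoadX509KeyPair("apiserver.crt", "apiserver.key")
	if err != nil {
		log.Fatalf("cert/key mismatch or unreadable: %v", err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subject:   ", leaf.Subject)
	fmt.Println("SANs (IPs):", leaf.IPAddresses)
	fmt.Println("expires:   ", leaf.NotAfter)
}
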
	I1028 11:26:23.864349    3404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:26:23.871370    3404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:26:23.871904    3404 kubeadm.go:392] StartCluster: {Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clu
sterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:26:23.881233    3404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 11:26:23.917901    3404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:26:23.952239    3404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:26:23.982231    3404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:26:24.002195    3404 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:26:24.002195    3404 kubeadm.go:157] found existing configuration files:
	
	I1028 11:26:24.012232    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:26:24.034200    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:26:24.048200    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:26:24.086190    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:26:24.101998    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:26:24.117170    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:26:24.150337    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:26:24.168916    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:26:24.181040    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:26:24.210838    3404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:26:24.228476    3404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:26:24.238470    3404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:26:24.257899    3404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:26:24.720246    3404 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:26:40.392712    3404 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:26:40.392899    3404 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:26:40.393130    3404 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:26:40.393258    3404 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:26:40.393625    3404 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:26:40.393817    3404 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:26:40.397274    3404 out.go:235]   - Generating certificates and keys ...
	I1028 11:26:40.397624    3404 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:26:40.397836    3404 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:26:40.398036    3404 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:26:40.398274    3404 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:26:40.398416    3404 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:26:40.398663    3404 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:26:40.399087    3404 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:26:40.399540    3404 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-201400 localhost] and IPs [172.27.248.250 127.0.0.1 ::1]
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-201400 localhost] and IPs [172.27.248.250 127.0.0.1 ::1]
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:26:40.399711    3404 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:26:40.400336    3404 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:26:40.400336    3404 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:26:40.400500    3404 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:26:40.401144    3404 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:26:40.401176    3404 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:26:40.401176    3404 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:26:40.403707    3404 out.go:235]   - Booting up control plane ...
	I1028 11:26:40.404357    3404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:26:40.404469    3404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:26:40.404469    3404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:26:40.404469    3404 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:26:40.405069    3404 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:26:40.405069    3404 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:26:40.405069    3404 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:26:40.405704    3404 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:26:40.405704    3404 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002597397s
	I1028 11:26:40.406287    3404 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:26:40.406287    3404 kubeadm.go:310] [api-check] The API server is healthy after 8.854898091s
	I1028 11:26:40.406287    3404 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:26:40.406287    3404 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:26:40.406287    3404 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:26:40.407400    3404 kubeadm.go:310] [mark-control-plane] Marking the node ha-201400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:26:40.407400    3404 kubeadm.go:310] [bootstrap-token] Using token: ur7fzz.cobvstbgnh3qhf27
	I1028 11:26:40.409992    3404 out.go:235]   - Configuring RBAC rules ...
	I1028 11:26:40.409992    3404 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:26:40.409992    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:26:40.410947    3404 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:26:40.411958    3404 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:26:40.411958    3404 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:26:40.411958    3404 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:26:40.411958    3404 kubeadm.go:310] 
	I1028 11:26:40.411958    3404 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:26:40.411958    3404 kubeadm.go:310] 
	I1028 11:26:40.411958    3404 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:26:40.411958    3404 kubeadm.go:310] 
	I1028 11:26:40.411958    3404 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:26:40.412983    3404 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:26:40.413060    3404 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:26:40.413060    3404 kubeadm.go:310] 
	I1028 11:26:40.413060    3404 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:26:40.413267    3404 kubeadm.go:310] 
	I1028 11:26:40.413267    3404 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:26:40.413267    3404 kubeadm.go:310] 
	I1028 11:26:40.413481    3404 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:26:40.413610    3404 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:26:40.413610    3404 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:26:40.413610    3404 kubeadm.go:310] 
	I1028 11:26:40.413610    3404 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:26:40.414333    3404 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:26:40.414395    3404 kubeadm.go:310] 
	I1028 11:26:40.414562    3404 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ur7fzz.cobvstbgnh3qhf27 \
	I1028 11:26:40.414829    3404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b \
	I1028 11:26:40.414829    3404 kubeadm.go:310] 	--control-plane 
	I1028 11:26:40.414829    3404 kubeadm.go:310] 
	I1028 11:26:40.415245    3404 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:26:40.415306    3404 kubeadm.go:310] 
	I1028 11:26:40.415542    3404 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ur7fzz.cobvstbgnh3qhf27 \
	I1028 11:26:40.415779    3404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b 
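
The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). If that hash ever needs to be recomputed from the CA certificate, a sketch like the following reproduces it (in this cluster kubeadm's certificatesDir is /var/lib/minikube/certs, so the CA lives at /var/lib/minikube/certs/ca.crt; the local path used here is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
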
	I1028 11:26:40.415779    3404 cni.go:84] Creating CNI manager for ""
	I1028 11:26:40.415779    3404 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:26:40.419458    3404 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:26:40.441295    3404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:26:40.450633    3404 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:26:40.450633    3404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:26:40.514309    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 11:26:41.386419    3404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:26:41.400339    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:41.400339    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-201400 minikube.k8s.io/updated_at=2024_10_28T11_26_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-201400 minikube.k8s.io/primary=true
	I1028 11:26:41.444480    3404 ops.go:34] apiserver oom_adj: -16
	I1028 11:26:41.692663    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:42.194100    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:42.694196    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:43.193901    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:43.694160    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:44.196227    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:26:44.373890    3404 kubeadm.go:1113] duration metric: took 2.9874367s to wait for elevateKubeSystemPrivileges
	I1028 11:26:44.374020    3404 kubeadm.go:394] duration metric: took 20.5017543s to StartCluster
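
The repeated "kubectl get sa default" runs above are a plain poll: retry roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges wait reports as taking ~3s. The same retry-until-ready pattern, sketched generically with the standard library (the command, interval, and deadline are illustrative, and kubectl/KUBECONFIG availability is assumed):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// pollUntil runs check every interval until it succeeds or the deadline passes.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
		return exec.Command("kubectl", "get", "sa", "default").Run()
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("default service account is ready")
}
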
	I1028 11:26:44.374068    3404 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:44.374306    3404 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:26:44.375789    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:44.377505    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:26:44.377505    3404 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:26:44.377505    3404 start.go:241] waiting for startup goroutines ...
	I1028 11:26:44.377505    3404 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:26:44.377505    3404 addons.go:69] Setting storage-provisioner=true in profile "ha-201400"
	I1028 11:26:44.377505    3404 addons.go:69] Setting default-storageclass=true in profile "ha-201400"
	I1028 11:26:44.378127    3404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-201400"
	I1028 11:26:44.377505    3404 addons.go:234] Setting addon storage-provisioner=true in "ha-201400"
	I1028 11:26:44.378200    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:26:44.378345    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:26:44.378752    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:44.379729    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:44.671126    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:26:45.248868    3404 start.go:971] {"host.minikube.internal": 172.27.240.1} host record injected into CoreDNS's ConfigMap
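
The sed pipeline above edits the coredns ConfigMap in place, inserting a hosts{} block (mapping host.minikube.internal to the Hyper-V host IP 172.27.240.1) immediately before the "forward . /etc/resolv.conf" plugin. The same Corefile edit expressed as Go string handling, which may be easier to follow than the sed expression (the Corefile snippet below is illustrative, not the full ConfigMap):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block immediately before the forward plugin.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := `.:53 {
    errors
    health
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
}`
	fmt.Println(injectHostRecord(corefile, "172.27.240.1"))
}
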
	I1028 11:26:46.726590    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:46.726714    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:46.727515    3404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:26:46.727515    3404 kapi.go:59] client config for ha-201400: &rest.Config{Host:"https://172.27.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:26:46.730750    3404 addons.go:234] Setting addon default-storageclass=true in "ha-201400"
	I1028 11:26:46.730939    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:26:46.732008    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:46.732273    3404 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:26:46.785520    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:46.785520    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:46.803181    3404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:26:46.806490    3404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:46.806490    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:26:46.806581    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:49.157130    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:49.157130    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:49.157407    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:26:49.250186    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:49.250778    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:49.250986    3404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:49.250986    3404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:26:49.251126    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:26:51.578904    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:26:51.578904    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:51.578988    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:26:51.999136    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:51.999864    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:52.000363    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:52.170502    3404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:54.349037    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:26:54.349091    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:54.349091    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:26:54.488132    3404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:54.690078    3404 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:26:54.690078    3404 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:26:54.690954    3404 round_trippers.go:463] GET https://172.27.255.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:26:54.690954    3404 round_trippers.go:469] Request Headers:
	I1028 11:26:54.690954    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:26:54.690954    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:26:54.707478    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:26:54.708398    3404 round_trippers.go:463] PUT https://172.27.255.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:26:54.708398    3404 round_trippers.go:469] Request Headers:
	I1028 11:26:54.708398    3404 round_trippers.go:473]     Content-Type: application/json
	I1028 11:26:54.708398    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:26:54.708398    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:26:54.712648    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:26:54.715845    3404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:26:54.719644    3404 addons.go:510] duration metric: took 10.3420229s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:26:54.719644    3404 start.go:246] waiting for cluster config update ...
	I1028 11:26:54.719644    3404 start.go:255] writing updated cluster config ...
	I1028 11:26:54.722734    3404 out.go:201] 
	I1028 11:26:54.741658    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:26:54.741806    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:26:54.747673    3404 out.go:177] * Starting "ha-201400-m02" control-plane node in "ha-201400" cluster
	I1028 11:26:54.750222    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:26:54.750222    3404 cache.go:56] Caching tarball of preloaded images
	I1028 11:26:54.750222    3404 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:26:54.750776    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:26:54.750994    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:26:54.762883    3404 start.go:360] acquireMachinesLock for ha-201400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:26:54.762883    3404 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-201400-m02"
	I1028 11:26:54.762883    3404 start.go:93] Provisioning new machine with config: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:26:54.762883    3404 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1028 11:26:54.767460    3404 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:26:54.767460    3404 start.go:159] libmachine.API.Create for "ha-201400" (driver="hyperv")
	I1028 11:26:54.768021    3404 client.go:168] LocalClient.Create starting
	I1028 11:26:54.768233    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 11:26:54.768233    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:26:54.768233    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:26:54.769022    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 11:26:54.769251    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:26:54.769251    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:26:54.769251    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 11:26:56.829949    3404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 11:26:56.829949    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:56.830054    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 11:26:58.706617    3404 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 11:26:58.707402    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:26:58.707504    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:27:00.296323    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:27:00.296685    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:00.296685    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:27:04.100440    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:27:04.100440    3404 main.go:141] libmachine: [stderr =====>] : 
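
Every [executing ==>] / [stdout =====>] / [stderr =====>] triple in this log is one non-interactive PowerShell invocation made by the hyperv driver. A minimal Go sketch of that call pattern, capturing stdout and stderr separately (the helper name runPowerShell and the example cmdlet are illustrative, not minikube's internal API):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // runPowerShell executes a single cmdlet string through a fresh,
    // non-interactive PowerShell session and returns stdout and stderr
    // separately, mirroring the [stdout =====>] / [stderr =====>] lines above.
    func runPowerShell(command string) (stdout, stderr string, err error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command,
        )
        var out, errBuf bytes.Buffer
        cmd.Stdout = &out
        cmd.Stderr = &errBuf
        err = cmd.Run()
        return out.String(), errBuf.String(), err
    }

    func main() {
        out, errOut, err := runPowerShell(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
        fmt.Printf("[stdout =====>] : %s\n", out)
        fmt.Printf("[stderr =====>] : %s\n", errOut)
        if err != nil {
            fmt.Println("exit error:", err)
        }
    }

Running each cmdlet in a fresh -NoProfile -NonInteractive session keeps the output machine-parseable, which is consistent with each probe in this log taking roughly two to three seconds.
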
	I1028 11:27:04.103637    3404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:27:04.626055    3404 main.go:141] libmachine: Creating SSH key...
	I1028 11:27:04.802036    3404 main.go:141] libmachine: Creating VM...
	I1028 11:27:04.802036    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:27:07.855942    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:27:07.855942    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:07.855942    3404 main.go:141] libmachine: Using switch "Default Switch"
	I1028 11:27:07.855942    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:27:09.820105    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:27:09.820324    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:09.820459    3404 main.go:141] libmachine: Creating VHD
	I1028 11:27:09.820459    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 11:27:13.667051    3404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 32F6B9B4-1EAE-4BBC-AB35-E730EDC8FD37
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 11:27:13.667051    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:13.667051    3404 main.go:141] libmachine: Writing magic tar header
	I1028 11:27:13.667051    3404 main.go:141] libmachine: Writing SSH key tar header
	I1028 11:27:13.679270    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 11:27:16.961999    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:16.961999    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:16.962148    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\disk.vhd' -SizeBytes 20000MB
	I1028 11:27:19.610411    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:19.610411    3404 main.go:141] libmachine: [stderr =====>] : 
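
The node's disk is produced in three Hyper-V steps: New-VHD creates a tiny fixed 10MB image (the "Writing magic tar header" / "Writing SSH key tar header" lines suggest the raw file is then seeded with a tar stream carrying the machine's SSH key), Convert-VHD turns it into a dynamic VHD, and Resize-VHD grows it to the requested 20000MB. A sketch of that pipeline driven from Go via os/exec, using the same cmdlets shown above (the helper name, paths, and error handling are illustrative, and running it requires Hyper-V admin rights):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ps runs one Hyper-V cmdlet through non-interactive PowerShell and
    // returns a wrapped error carrying the combined output on failure.
    func ps(command string) error {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command,
        ).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w\n%s", command, err, out)
        }
        return nil
    }

    func main() {
        dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02`
        steps := []string{
            // 1. Create a tiny fixed-size VHD; its raw contents can then be
            //    seeded directly (the "Writing ... tar header" lines above).
            fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
            // 2. Convert the seeded fixed VHD into a dynamic one, dropping the source.
            fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
            // 3. Grow the dynamic VHD to its final capacity.
            fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
        }
        for _, s := range steps {
            if err := ps(s); err != nil {
                fmt.Println(err)
                return
            }
        }
    }
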
	I1028 11:27:19.610411    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-201400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 11:27:23.376062    3404 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-201400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 11:27:23.376206    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:23.376259    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-201400-m02 -DynamicMemoryEnabled $false
	I1028 11:27:25.750743    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:25.751024    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:25.751209    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-201400-m02 -Count 2
	I1028 11:27:28.019983    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:28.020659    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:28.020659    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-201400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\boot2docker.iso'
	I1028 11:27:30.730521    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:30.730521    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:30.731242    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-201400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\disk.vhd'
	I1028 11:27:33.515880    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:33.516296    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:33.516296    3404 main.go:141] libmachine: Starting VM...
	I1028 11:27:33.516360    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-201400-m02
	I1028 11:27:36.777107    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:36.777107    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:36.777107    3404 main.go:141] libmachine: Waiting for host to start...
	I1028 11:27:36.777107    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:39.171191    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:39.171191    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:39.171450    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:41.858125    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:41.858125    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:42.858440    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:45.202781    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:45.202984    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:45.202984    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:47.874624    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:47.875626    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:48.876621    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:51.217419    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:51.217419    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:51.218097    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:53.870442    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:53.870442    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:54.871157    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:27:57.207369    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:27:57.207369    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:27:57.207449    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:27:59.819718    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:27:59.819718    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:00.820964    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:03.213229    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:03.213229    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:03.213229    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:06.031533    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:06.031533    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:06.031533    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:08.464115    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:08.464756    3404 main.go:141] libmachine: [stderr =====>] : 
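
The "Waiting for host to start..." section above is a simple poll: check the VM state, ask Hyper-V for the first adapter's first IP address, and sleep between empty answers until an address such as 172.27.250.174 appears. A self-contained Go sketch of that loop, assuming only the cmdlets shown above (function names and the timeout are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psOutput runs one cmdlet through non-interactive PowerShell and
    // returns its trimmed stdout.
    func psOutput(command string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command,
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP mirrors the polling loop above: confirm the VM is Running,
    // query its first adapter's first IP address, and sleep a second between
    // empty answers until a deadline expires.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := psOutput(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := psOutput(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName))
                if err != nil {
                    return "", err
                }
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
    }

    func main() {
        ip, err := waitForIP("ha-201400-m02", 3*time.Minute)
        fmt.Println(ip, err)
    }
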
	I1028 11:28:08.464756    3404 machine.go:93] provisionDockerMachine start ...
	I1028 11:28:08.464915    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:10.856985    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:10.856985    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:10.857795    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:13.648224    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:13.648224    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:13.654295    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:13.669739    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:13.669830    3404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:28:13.807194    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 11:28:13.807299    3404 buildroot.go:166] provisioning hostname "ha-201400-m02"
	I1028 11:28:13.807389    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:16.112936    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:16.113106    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:16.113106    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:18.888769    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:18.888769    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:18.895829    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:18.896572    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:18.896572    3404 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-201400-m02 && echo "ha-201400-m02" | sudo tee /etc/hostname
	I1028 11:28:19.075659    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-201400-m02
	
	I1028 11:28:19.075748    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:21.475184    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:21.475184    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:21.475184    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:24.215339    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:24.215339    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:24.220660    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:24.221828    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:24.221828    3404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-201400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-201400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-201400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:28:24.372236    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:28:24.372299    3404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:28:24.372363    3404 buildroot.go:174] setting up certificates
	I1028 11:28:24.372363    3404 provision.go:84] configureAuth start
	I1028 11:28:24.372489    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:26.626891    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:26.626891    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:26.626891    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:29.344159    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:29.344961    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:29.344961    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:31.634677    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:31.634677    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:31.634677    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:34.390067    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:34.390895    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:34.390895    3404 provision.go:143] copyHostCerts
	I1028 11:28:34.391041    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:28:34.391041    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:28:34.391041    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:28:34.391740    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:28:34.392945    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:28:34.393221    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:28:34.393221    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:28:34.393555    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:28:34.394843    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:28:34.395085    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:28:34.395222    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:28:34.395614    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:28:34.396799    3404 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-201400-m02 san=[127.0.0.1 172.27.250.174 ha-201400-m02 localhost minikube]
	I1028 11:28:34.834801    3404 provision.go:177] copyRemoteCerts
	I1028 11:28:34.845751    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:28:34.845751    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:37.169582    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:37.169697    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:37.169825    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:39.868014    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:39.868014    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:39.868862    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:28:39.986833    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1410237s)
	I1028 11:28:39.986955    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:28:39.987333    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:28:40.053103    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:28:40.053103    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:28:40.107701    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:28:40.108707    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:28:40.163487    3404 provision.go:87] duration metric: took 15.7908827s to configureAuth
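
All of the "Run:" commands and certificate copies above go over an SSH connection to the guest, authenticated with the per-machine id_rsa shown in the "new ssh client" entries. A self-contained sketch of such a runner using golang.org/x/crypto/ssh (helper name and host-key policy are illustrative; minikube's real ssh_runner additionally implements the scp-style file transfers):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the node with the per-machine private key (the
    // SSHKeyPath shown in the "new ssh client" lines above) and runs one
    // command, returning its combined output.
    func runOverSSH(ip, keyPath, command string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway lab VM; not for production use
        }
        client, err := ssh.Dial("tcp", ip+":22", cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(command)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("172.27.250.174",
            `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa`,
            "hostname")
        fmt.Println(out, err)
    }
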
	I1028 11:28:40.163546    3404 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:28:40.164176    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:28:40.164176    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:42.431210    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:42.431210    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:42.431325    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:45.252182    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:45.252182    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:45.258901    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:45.259092    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:45.259092    3404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:28:45.391961    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:28:45.392025    3404 buildroot.go:70] root file system type: tmpfs
	I1028 11:28:45.392278    3404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:28:45.392278    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:47.684389    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:47.684869    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:47.684869    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:50.441568    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:50.442597    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:50.448703    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:50.449542    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:50.449542    3404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.248.250"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:28:50.622070    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.248.250
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:28:50.622670    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:28:52.913976    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:28:52.913976    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:52.913976    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:28:55.675241    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:28:55.675241    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:28:55.684951    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:28:55.685429    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:28:55.685504    3404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:28:58.023426    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 11:28:58.023426    3404 machine.go:96] duration metric: took 49.5581113s to provisionDockerMachine
	I1028 11:28:58.023426    3404 client.go:171] duration metric: took 2m3.2539482s to LocalClient.Create
	I1028 11:28:58.023426    3404 start.go:167] duration metric: took 2m3.2545746s to libmachine.API.Create "ha-201400"
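
The docker.service unit above is installed idempotently: the rendered unit is written to docker.service.new, and only when it differs from the installed file is it moved into place and the daemon reloaded, enabled, and restarted. A dry-run Go sketch of that sequence (runCmd, shellQuote, and the abbreviated unit text are illustrative stand-ins for an SSH-backed runner):

    package main

    import (
        "fmt"
        "strings"
    )

    // runCmd is a dry-run stand-in for a remote SSH runner: it only prints
    // the command that would be executed on the guest, so the sketch is safe
    // to run anywhere.
    func runCmd(cmd string) (string, error) {
        fmt.Println("ssh>", cmd)
        return "", nil
    }

    // shellQuote wraps s in single quotes for /bin/sh, escaping embedded quotes.
    func shellQuote(s string) string {
        return "'" + strings.ReplaceAll(s, "'", `'\''`) + "'"
    }

    // installUnit mirrors the idempotent pattern in the log: write the
    // rendered unit to <unit>.new, and only when it differs from the
    // installed unit move it into place and daemon-reload, enable, and
    // restart docker.
    func installUnit(unitPath, unitText string) error {
        if _, err := runCmd("sudo mkdir -p /lib/systemd/system && printf %s " +
            shellQuote(unitText) + " | sudo tee " + unitPath + ".new"); err != nil {
            return err
        }
        _, err := runCmd(fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
            unitPath))
        return err
    }

    func main() {
        unit := "[Unit]\nDescription=Docker Application Container Engine\n" // abbreviated
        fmt.Println(installUnit("/lib/systemd/system/docker.service", unit))
    }
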
	I1028 11:28:58.023426    3404 start.go:293] postStartSetup for "ha-201400-m02" (driver="hyperv")
	I1028 11:28:58.023426    3404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:28:58.037003    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:28:58.037003    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:00.339341    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:00.339398    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:00.339398    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:03.070256    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:03.071306    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:03.072038    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:29:03.192445    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1553838s)
	I1028 11:29:03.204331    3404 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:29:03.211286    3404 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:29:03.211286    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:29:03.211286    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:29:03.212822    3404 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:29:03.212822    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:29:03.224479    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:29:03.244804    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:29:03.296290    3404 start.go:296] duration metric: took 5.2728046s for postStartSetup
	I1028 11:29:03.299506    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:05.582782    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:05.582969    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:05.583589    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:08.257122    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:08.257122    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:08.257758    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:29:08.260394    3404 start.go:128] duration metric: took 2m13.496005s to createHost
	I1028 11:29:08.260394    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:10.542578    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:10.543311    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:10.543378    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:13.220449    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:13.220449    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:13.226427    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:29:13.226842    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:29:13.226842    3404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:29:13.364434    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114953.378785503
	
	I1028 11:29:13.364434    3404 fix.go:216] guest clock: 1730114953.378785503
	I1028 11:29:13.364434    3404 fix.go:229] Guest: 2024-10-28 11:29:13.378785503 +0000 UTC Remote: 2024-10-28 11:29:08.2603949 +0000 UTC m=+344.412007101 (delta=5.118390603s)
	I1028 11:29:13.364434    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:15.676995    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:15.677775    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:15.677775    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:18.403533    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:18.403533    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:18.409919    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:29:18.410499    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.250.174 22 <nil> <nil>}
	I1028 11:29:18.410499    3404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730114953
	I1028 11:29:18.554784    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:29:13 UTC 2024
	
	I1028 11:29:18.554784    3404 fix.go:236] clock set: Mon Oct 28 11:29:13 UTC 2024
	 (err=<nil>)
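
The guest clock check above reads "date +%s.%N" on the new node, compares it with the host, and corrects any large drift with "sudo date -s @<unix>" (here a delta of about 5.1s was found and fixed). A sketch of that logic over a generic command runner (the runner stub and the 2-second threshold are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock reads the guest clock, computes the drift relative to
    // the host, and when the drift is large pushes the host's epoch into the
    // guest. run is any execute-on-the-guest function (an SSH runner in practice).
    func syncGuestClock(run func(cmd string) (string, error)) error {
        out, err := run("date +%s.%N")
        if err != nil {
            return err
        }
        guestSec, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(int64(guestSec), 0))
        fmt.Printf("guest clock drift: %v\n", drift)
        if drift > 2*time.Second || drift < -2*time.Second {
            _, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }

    func main() {
        // Local stub so the sketch runs anywhere: pretend the guest clock is
        // about five seconds off from the host, triggering the correction.
        stub := func(cmd string) (string, error) {
            fmt.Println("ssh>", cmd)
            return fmt.Sprintf("%d.000000000", time.Now().Add(-5*time.Second).Unix()), nil
        }
        fmt.Println(syncGuestClock(stub))
    }
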
	I1028 11:29:18.554784    3404 start.go:83] releasing machines lock for "ha-201400-m02", held for 2m23.790278s
	I1028 11:29:18.555149    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:20.831634    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:20.831634    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:20.831634    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:23.544539    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:23.544539    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:23.548487    3404 out.go:177] * Found network options:
	I1028 11:29:23.551574    3404 out.go:177]   - NO_PROXY=172.27.248.250
	W1028 11:29:23.554334    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:29:23.559173    3404 out.go:177]   - NO_PROXY=172.27.248.250
	W1028 11:29:23.562136    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:29:23.563540    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:29:23.565457    3404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:29:23.565457    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:23.574405    3404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:29:23.574405    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m02 ).state
	I1028 11:29:25.897479    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:25.897479    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:25.897479    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:25.903822    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:25.903822    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:25.903951    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m02 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:28.677538    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:28.677538    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:28.678042    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:29:28.757700    3404 main.go:141] libmachine: [stdout =====>] : 172.27.250.174
	
	I1028 11:29:28.758400    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:28.758458    3404 sshutil.go:53] new ssh client: &{IP:172.27.250.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m02\id_rsa Username:docker}
	I1028 11:29:28.776680    3404 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2022155s)
	W1028 11:29:28.776680    3404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:29:28.789888    3404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:29:28.818963    3404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:29:28.818963    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:29:28.818963    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:29:28.827198    3404 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.261682s)
	W1028 11:29:28.827198    3404 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 11:29:28.876276    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:29:28.910874    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:29:28.931072    3404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:29:28.943403    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1028 11:29:28.961264    3404 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:29:28.961264    3404 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:29:28.976649    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:29:29.011154    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:29:29.044776    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:29:29.092222    3404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:29:29.129462    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:29:29.163754    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:29:29.198945    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:29:29.233489    3404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:29:29.255753    3404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:29:29.268071    3404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:29:29.303310    3404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:29:29.333300    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:29.545275    3404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:29:29.578357    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:29:29.590766    3404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:29:29.627278    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:29:29.663367    3404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:29:29.733594    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:29:29.773102    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:29:29.814070    3404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:29:29.901296    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:29:29.931996    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:29:29.986927    3404 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:29:30.005514    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:29:30.025769    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:29:30.085203    3404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:29:30.317291    3404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:29:30.518281    3404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:29:30.518400    3404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:29:30.568016    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:30.779162    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:29:33.389791    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6105997s)
	I1028 11:29:33.402679    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:29:33.442916    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:29:33.480397    3404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:29:33.697651    3404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:29:33.912693    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:34.123465    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:29:34.172847    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:29:34.215095    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:34.438839    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:29:34.555007    3404 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:29:34.567577    3404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:29:34.577206    3404 start.go:563] Will wait 60s for crictl version
	I1028 11:29:34.590283    3404 ssh_runner.go:195] Run: which crictl
	I1028 11:29:34.608633    3404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:29:34.686848    3404 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:29:34.696815    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:29:34.746268    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:29:34.788565    3404 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:29:34.790578    3404 out.go:177]   - env NO_PROXY=172.27.248.250
	I1028 11:29:34.793565    3404 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:29:34.798566    3404 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:29:34.801566    3404 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:29:34.801566    3404 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:29:34.812567    3404 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:29:34.819067    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:29:34.843628    3404 mustload.go:65] Loading cluster: ha-201400
	I1028 11:29:34.844431    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:29:34.844975    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:29:37.118525    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:37.118525    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:37.118525    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:29:37.119177    3404 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400 for IP: 172.27.250.174
	I1028 11:29:37.119177    3404 certs.go:194] generating shared ca certs ...
	I1028 11:29:37.119177    3404 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:29:37.119879    3404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:29:37.119879    3404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:29:37.120536    3404 certs.go:256] generating profile certs ...
	I1028 11:29:37.120536    3404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key
	I1028 11:29:37.121109    3404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db
	I1028 11:29:37.121364    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.248.250 172.27.250.174 172.27.255.254]
	I1028 11:29:37.351648    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db ...
	I1028 11:29:37.351648    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db: {Name:mkc25ff31e988b8df10b3ffb0ba6e4f6e901478b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:29:37.353626    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db ...
	I1028 11:29:37.353626    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db: {Name:mk59b62ce762b421cd03d39be8b38667a90ff6d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:29:37.354289    3404 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.bf16c6db -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt
	I1028 11:29:37.370213    3404 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.bf16c6db -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key
	I1028 11:29:37.372220    3404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key
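
Adding the m02 control plane forces a new apiserver serving certificate whose IP SANs include the existing control plane (172.27.248.250), the new node (172.27.250.174), and the HA virtual IP (172.27.255.254), so the API server stays valid behind any of those addresses; the APIServerHAVIP field in the cluster config at the top of this log is where the VIP SAN comes from. A conceptual Go sketch of issuing such a CA-signed certificate with crypto/x509 (key size, validity, and subject are illustrative; this is not minikube's actual certificate code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a CA-signed serving certificate whose IP SANs
    // cover every supplied address, including the HA virtual IP.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Throwaway self-signed CA, standing in for the cached minikubeCA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        ips := []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("172.27.248.250"), net.ParseIP("172.27.250.174"),
            net.ParseIP("172.27.255.254"), // APIServerHAVIP
        }
        der, _, err := signServerCert(caCert, caKey, ips, []string{"localhost", "minikube", "ha-201400-m02"})
        fmt.Println("server cert bytes:", len(der), "err:", err)
    }
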
	I1028 11:29:37.372220    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:29:37.372220    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:29:37.372220    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:29:37.373215    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:29:37.374221    3404 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:29:37.374221    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:29:37.375667    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:29:37.376307    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:29:37.376701    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:37.376902    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:29:37.377226    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:29:37.377226    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:29:39.624399    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:39.624592    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:39.624661    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:42.328260    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:29:42.328260    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:42.328844    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:29:42.435833    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:29:42.445230    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:29:42.477996    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:29:42.485781    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:29:42.524646    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:29:42.532360    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:29:42.565164    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:29:42.573160    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:29:42.607661    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:29:42.614190    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:29:42.647035    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:29:42.653640    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:29:42.674361    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:29:42.725662    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:29:42.778833    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:29:42.839953    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:29:42.888963    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:29:42.938465    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:29:42.987636    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:29:43.035758    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:29:43.088857    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:29:43.142444    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:29:43.192448    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:29:43.243043    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:29:43.277462    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:29:43.310856    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:29:43.349308    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:29:43.385202    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:29:43.426286    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:29:43.467007    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:29:43.513436    3404 ssh_runner.go:195] Run: openssl version
	I1028 11:29:43.535531    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:29:43.568535    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:43.575201    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:43.587727    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:29:43.608324    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:29:43.639950    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:29:43.675552    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:29:43.682695    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:29:43.694765    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:29:43.715777    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:29:43.747358    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:29:43.780964    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:29:43.788082    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:29:43.799896    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:29:43.821274    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:29:43.854543    3404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:29:43.860832    3404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:29:43.860832    3404 kubeadm.go:934] updating node {m02 172.27.250.174 8443 v1.31.2 docker true true} ...
	I1028 11:29:43.861372    3404 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-201400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.250.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:29:43.861430    3404 kube-vip.go:115] generating kube-vip config ...
	I1028 11:29:43.873321    3404 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:29:43.903097    3404 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:29:43.903176    3404 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:29:43.915803    3404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:29:43.935872    3404 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:29:43.947252    3404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:29:43.970109    3404 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl
	I1028 11:29:43.970218    3404 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet
	I1028 11:29:43.970218    3404 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm
	I1028 11:29:45.119963    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:29:45.132967    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:29:45.144925    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:29:45.145610    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:29:45.411111    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:29:45.422104    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:29:45.437193    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:29:45.437414    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:29:45.523389    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:29:45.581488    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:29:45.593484    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:29:45.610610    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:29:45.610610    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:29:46.483786    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:29:46.502600    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1028 11:29:46.539821    3404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:29:46.571674    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:29:46.614790    3404 ssh_runner.go:195] Run: grep 172.27.255.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:29:46.621997    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:29:46.655870    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:29:46.870364    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:29:46.903292    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:29:46.904033    3404 start.go:317] joinCluster: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:29:46.904033    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:29:46.904033    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:29:49.099609    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:29:49.099982    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:49.100088    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:29:51.818832    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:29:51.818832    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:29:51.819351    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:29:52.276138    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3720439s)
	I1028 11:29:52.276138    3404 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:29:52.276138    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4qdjc.zt2t1z54vyly6fdz --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m02 --control-plane --apiserver-advertise-address=172.27.250.174 --apiserver-bind-port=8443"
	I1028 11:30:37.343465    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4qdjc.zt2t1z54vyly6fdz --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m02 --control-plane --apiserver-advertise-address=172.27.250.174 --apiserver-bind-port=8443": (45.0668178s)
	I1028 11:30:37.343525    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:30:38.168220    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-201400-m02 minikube.k8s.io/updated_at=2024_10_28T11_30_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-201400 minikube.k8s.io/primary=false
	I1028 11:30:38.396911    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-201400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:30:38.582121    3404 start.go:319] duration metric: took 51.6775031s to joinCluster
	I1028 11:30:38.582196    3404 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:30:38.584044    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:30:38.588123    3404 out.go:177] * Verifying Kubernetes components...
	I1028 11:30:38.603562    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:30:39.073310    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:30:39.112696    3404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:30:39.113586    3404 kapi.go:59] client config for ha-201400: &rest.Config{Host:"https://172.27.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:30:39.113586    3404 kubeadm.go:483] Overriding stale ClientConfig host https://172.27.255.254:8443 with https://172.27.248.250:8443
	I1028 11:30:39.114644    3404 node_ready.go:35] waiting up to 6m0s for node "ha-201400-m02" to be "Ready" ...
	I1028 11:30:39.114644    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:39.114644    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:39.114644    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:39.114644    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:39.132097    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:30:39.615565    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:39.615565    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:39.615565    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:39.615565    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:39.621955    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:40.115430    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:40.115430    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:40.115430    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:40.115430    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:40.122200    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:40.615588    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:40.615630    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:40.615670    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:40.615670    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:40.620386    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:41.115593    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:41.116268    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:41.116268    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:41.116268    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:41.121541    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:41.122815    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:41.615564    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:41.615564    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:41.615564    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:41.615564    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:41.621583    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:42.115978    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:42.115978    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:42.115978    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:42.116144    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:42.123656    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:30:42.615649    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:42.615649    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:42.615649    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:42.615649    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:42.622875    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:30:43.114836    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:43.114836    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:43.114836    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:43.114836    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:43.120285    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:43.615673    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:43.615673    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:43.615673    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:43.615673    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:43.623677    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:30:43.624666    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:44.115405    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:44.115405    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:44.115405    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:44.115405    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:44.121969    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:44.615248    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:44.615248    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:44.615248    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:44.615248    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:44.621225    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:45.115722    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:45.115722    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:45.115722    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:45.115722    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:45.133769    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:30:45.615532    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:45.615600    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:45.615600    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:45.615600    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:45.624879    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:30:45.626181    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:46.115233    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:46.115233    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:46.115233    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:46.115233    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:46.159237    3404 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I1028 11:30:46.615360    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:46.615360    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:46.615360    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:46.615360    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:46.624874    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:30:47.115052    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:47.115052    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:47.115052    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:47.115052    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:47.120030    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:47.615970    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:47.616034    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:47.616095    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:47.616095    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:47.622485    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:48.115106    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:48.115106    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:48.115106    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:48.115106    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:48.120252    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:48.120252    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:48.615545    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:48.615545    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:48.615545    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:48.615545    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:48.622657    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:30:49.116164    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:49.116164    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:49.116164    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:49.116347    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:49.122761    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:49.614810    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:49.614810    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:49.614810    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:49.614810    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:49.620564    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:50.114989    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:50.114989    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:50.114989    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:50.114989    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:50.120830    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:50.121710    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:50.615723    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:50.616141    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:50.616141    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:50.616141    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:50.622622    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:51.114909    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:51.114909    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:51.114909    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:51.114909    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:51.121161    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:51.620225    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:51.620225    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:51.620225    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:51.620225    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:51.636623    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:30:52.115494    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:52.115494    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:52.115494    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:52.115494    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:52.121658    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:52.122835    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:52.615746    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:52.615746    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:52.615746    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:52.615746    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:52.624411    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:30:53.115831    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:53.115945    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:53.115945    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:53.115945    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:53.121383    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:53.616170    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:53.616170    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:53.616170    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:53.616170    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:53.621282    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:54.115167    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:54.115167    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:54.115167    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:54.115167    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:54.136979    3404 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 11:30:54.140424    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:54.616048    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:54.616151    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:54.616151    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:54.616151    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:54.622586    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:55.115149    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:55.115695    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:55.115695    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:55.115695    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:55.122049    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:55.615525    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:55.615525    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:55.615525    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:55.615525    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:55.622274    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:56.115161    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:56.115161    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:56.115161    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:56.115161    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:56.120640    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:56.615852    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:56.616292    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:56.616292    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:56.616292    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:56.623488    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:56.624137    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:57.115655    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:57.115725    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:57.115725    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:57.115725    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:57.120582    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:30:57.615765    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:57.615765    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:57.615765    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:57.615765    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:57.621995    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:58.120219    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:58.120262    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:58.120262    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:58.120326    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:58.132880    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:30:58.614957    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:58.614957    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:58.614957    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:58.614957    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:58.620921    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:30:59.115558    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:59.115682    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:59.115682    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:59.115682    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:59.121956    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:30:59.122523    3404 node_ready.go:53] node "ha-201400-m02" has status "Ready":"False"
	I1028 11:30:59.615523    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:30:59.615523    3404 round_trippers.go:469] Request Headers:
	I1028 11:30:59.615523    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:30:59.615523    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:30:59.630384    3404 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1028 11:31:00.115131    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.115131    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.115131    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.115131    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.124848    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:31:00.125775    3404 node_ready.go:49] node "ha-201400-m02" has status "Ready":"True"
	I1028 11:31:00.125942    3404 node_ready.go:38] duration metric: took 21.0108928s for node "ha-201400-m02" to be "Ready" ...
	I1028 11:31:00.125942    3404 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:31:00.126143    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:00.126192    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.126192    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.126192    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.152864    3404 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I1028 11:31:00.163453    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.164454    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n2qnf
	I1028 11:31:00.164454    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.164454    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.164454    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.177756    3404 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 11:31:00.178520    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.178520    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.178604    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.178604    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.191578    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:31:00.192605    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.192605    3404 pod_ready.go:82] duration metric: took 28.151ms for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.192681    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.192832    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zt6f6
	I1028 11:31:00.192987    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.193094    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.193094    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.204089    3404 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:31:00.205024    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.206022    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.206022    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.206022    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.211751    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.212651    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.212651    3404 pod_ready.go:82] duration metric: took 19.9695ms for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.212709    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.212813    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400
	I1028 11:31:00.212907    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.212907    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.212907    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.219646    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:00.220417    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.220417    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.220417    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.220417    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.225034    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:31:00.226022    3404 pod_ready.go:93] pod "etcd-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.226022    3404 pod_ready.go:82] duration metric: took 13.3136ms for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.226022    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.226022    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m02
	I1028 11:31:00.226022    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.226022    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.226022    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.235103    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:31:00.235516    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.235516    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.235516    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.235516    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.242125    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.242125    3404 pod_ready.go:93] pod "etcd-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.242125    3404 pod_ready.go:82] duration metric: took 16.102ms for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.242125    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.316779    3404 request.go:632] Waited for 74.654ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:31:00.317237    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:31:00.317299    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.317339    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.317374    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.323256    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.515618    3404 request.go:632] Waited for 191.5293ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.515958    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:00.515958    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.515958    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.515958    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.521230    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.521892    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.521892    3404 pod_ready.go:82] duration metric: took 279.7641ms for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.521892    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.715606    3404 request.go:632] Waited for 193.3382ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:31:00.716247    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:31:00.716247    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.716247    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.716334    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.723095    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:00.915578    3404 request.go:632] Waited for 190.5016ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.915578    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:00.915578    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:00.915578    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:00.915578    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:00.921253    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:00.921970    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:00.922060    3404 pod_ready.go:82] duration metric: took 400.1634ms for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:00.922060    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.115419    3404 request.go:632] Waited for 193.2129ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:31:01.115419    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:31:01.115419    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.115419    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.115419    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.122361    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:01.316005    3404 request.go:632] Waited for 192.3075ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:01.316525    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:01.316525    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.316525    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.316525    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.321710    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:01.322514    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:01.322514    3404 pod_ready.go:82] duration metric: took 400.4494ms for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.322601    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.516314    3404 request.go:632] Waited for 193.6428ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:31:01.516314    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:31:01.516314    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.516314    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.516314    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.522527    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:01.715186    3404 request.go:632] Waited for 191.5876ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:01.715186    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:01.715186    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.715186    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.715186    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.722488    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:31:01.726074    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:01.727347    3404 pod_ready.go:82] duration metric: took 404.7413ms for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.727347    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:01.916212    3404 request.go:632] Waited for 188.6432ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:31:01.916756    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:31:01.916791    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:01.916791    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:01.916791    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:01.923093    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:02.116099    3404 request.go:632] Waited for 191.9217ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.116099    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.116099    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.116099    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.116099    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.122622    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:02.122700    3404 pod_ready.go:93] pod "kube-proxy-fg4c7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:02.123247    3404 pod_ready.go:82] duration metric: took 395.8954ms for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.123247    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.316767    3404 request.go:632] Waited for 193.5174ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:31:02.317309    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:31:02.317365    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.317365    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.317365    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.333733    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:31:02.516083    3404 request.go:632] Waited for 181.1833ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:02.516083    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:02.516541    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.516541    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.516541    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.522426    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:31:02.523224    3404 pod_ready.go:93] pod "kube-proxy-hkdzx" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:02.523305    3404 pod_ready.go:82] duration metric: took 400.0532ms for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.523305    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.716046    3404 request.go:632] Waited for 192.5752ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:31:02.716046    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:31:02.716046    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.716046    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.716046    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.722716    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:02.915902    3404 request.go:632] Waited for 192.3158ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.915902    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:31:02.915902    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:02.915902    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:02.915902    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:02.926561    3404 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:31:02.927743    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:02.927743    3404 pod_ready.go:82] duration metric: took 404.4339ms for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:02.927866    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:03.116036    3404 request.go:632] Waited for 188.1673ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:31:03.116036    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:31:03.116036    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.116036    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.116036    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.119621    3404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:31:03.318460    3404 request.go:632] Waited for 198.8366ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:03.318460    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:31:03.318460    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.318460    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.318460    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.325539    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:31:03.326336    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:31:03.326336    3404 pod_ready.go:82] duration metric: took 398.4655ms for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:31:03.326336    3404 pod_ready.go:39] duration metric: took 3.2002865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
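
Note: the pod_ready waits above repeatedly GET each control-plane pod and its node until the pod reports the Ready condition. A minimal client-go sketch of that per-pod check follows; it is not minikube's actual pod_ready.go helper, and the kubeconfig path and pod name are placeholders taken from this log.

// podready_sketch.go - minimal sketch of a "wait for pod Ready" poll with client-go.
// Kubeconfig path and pod name are illustrative placeholders, not minikube internals.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" lines
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-201400-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // crude poll; the real code also re-checks the node
	}
	fmt.Println("timed out waiting for Ready")
}
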
	I1028 11:31:03.326336    3404 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:31:03.339054    3404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:31:03.367679    3404 api_server.go:72] duration metric: took 24.7852027s to wait for apiserver process to appear ...
	I1028 11:31:03.367679    3404 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:31:03.367679    3404 api_server.go:253] Checking apiserver healthz at https://172.27.248.250:8443/healthz ...
	I1028 11:31:03.378574    3404 api_server.go:279] https://172.27.248.250:8443/healthz returned 200:
	ok
	I1028 11:31:03.378574    3404 round_trippers.go:463] GET https://172.27.248.250:8443/version
	I1028 11:31:03.378574    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.378574    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.378574    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.380673    3404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:31:03.380929    3404 api_server.go:141] control plane version: v1.31.2
	I1028 11:31:03.380929    3404 api_server.go:131] duration metric: took 13.25ms to wait for apiserver health ...
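
Note: the healthz check above is an HTTPS GET against /healthz that expects a 200 response with body "ok". A rough sketch is below; TLS verification is skipped here purely to keep the example short, whereas the real check uses the cluster's CA and client certificates.

// healthz_sketch.go - rough sketch of probing the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	resp, err := client.Get("https://172.27.248.250:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
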
	I1028 11:31:03.381001    3404 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:31:03.515762    3404 request.go:632] Waited for 134.707ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.516178    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.516178    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.516178    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.516274    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.528080    3404 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:31:03.535230    3404 system_pods.go:59] 17 kube-system pods found
	I1028 11:31:03.535313    3404 system_pods.go:61] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:31:03.535390    3404 system_pods.go:61] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:31:03.535450    3404 system_pods.go:61] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:31:03.535450    3404 system_pods.go:61] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:31:03.535491    3404 system_pods.go:61] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:31:03.535491    3404 system_pods.go:74] duration metric: took 154.4879ms to wait for pod list to return data ...
	I1028 11:31:03.535585    3404 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:31:03.715972    3404 request.go:632] Waited for 180.3188ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:31:03.716386    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:31:03.716493    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.716493    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.716493    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.723428    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:03.723491    3404 default_sa.go:45] found service account: "default"
	I1028 11:31:03.723491    3404 default_sa.go:55] duration metric: took 187.9036ms for default service account to be created ...
	I1028 11:31:03.723491    3404 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:31:03.915533    3404 request.go:632] Waited for 192.0396ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.915533    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:31:03.915533    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:03.915533    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:03.915533    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:03.932331    3404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1028 11:31:03.939800    3404 system_pods.go:86] 17 kube-system pods found
	I1028 11:31:03.939883    3404 system_pods.go:89] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:31:03.939883    3404 system_pods.go:89] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:31:03.939944    3404 system_pods.go:89] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:31:03.940052    3404 system_pods.go:89] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:31:03.940152    3404 system_pods.go:89] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:31:03.940206    3404 system_pods.go:89] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:31:03.940206    3404 system_pods.go:89] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:31:03.940206    3404 system_pods.go:126] duration metric: took 216.7132ms to wait for k8s-apps to be running ...
	I1028 11:31:03.940206    3404 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:31:03.951195    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:31:03.982483    3404 system_svc.go:56] duration metric: took 42.2764ms WaitForService to wait for kubelet
	I1028 11:31:03.982483    3404 kubeadm.go:582] duration metric: took 25.3999999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:31:03.982734    3404 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:31:04.115817    3404 request.go:632] Waited for 133.0502ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes
	I1028 11:31:04.115817    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes
	I1028 11:31:04.115817    3404 round_trippers.go:469] Request Headers:
	I1028 11:31:04.115817    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:31:04.115817    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:31:04.122450    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:31:04.123721    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:31:04.123721    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:31:04.123870    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:31:04.123870    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:31:04.123870    3404 node_conditions.go:105] duration metric: took 141.1349ms to run NodePressure ...
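
Note: the NodePressure verification above lists the nodes and reads their reported capacity (ephemeral storage, CPU) and pressure conditions. A client-go sketch of reading those same fields; the kubeconfig path is a placeholder and this is not minikube's node_conditions.go code.

// nodeconditions_sketch.go - sketch of the node capacity/pressure read.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}
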
	I1028 11:31:04.123920    3404 start.go:241] waiting for startup goroutines ...
	I1028 11:31:04.123920    3404 start.go:255] writing updated cluster config ...
	I1028 11:31:04.128708    3404 out.go:201] 
	I1028 11:31:04.146040    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:31:04.146287    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:31:04.156971    3404 out.go:177] * Starting "ha-201400-m03" control-plane node in "ha-201400" cluster
	I1028 11:31:04.159815    3404 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:31:04.159948    3404 cache.go:56] Caching tarball of preloaded images
	I1028 11:31:04.160277    3404 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:31:04.160277    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:31:04.160277    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:31:04.165639    3404 start.go:360] acquireMachinesLock for ha-201400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:31:04.165870    3404 start.go:364] duration metric: took 231.6µs to acquireMachinesLock for "ha-201400-m03"
	I1028 11:31:04.166208    3404 start.go:93] Provisioning new machine with config: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:31:04.166322    3404 start.go:125] createHost starting for "m03" (driver="hyperv")
	I1028 11:31:04.171180    3404 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:31:04.171180    3404 start.go:159] libmachine.API.Create for "ha-201400" (driver="hyperv")
	I1028 11:31:04.171180    3404 client.go:168] LocalClient.Create starting
	I1028 11:31:04.172282    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 11:31:04.172569    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:31:04.172569    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:31:04.172796    3404 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 11:31:04.172984    3404 main.go:141] libmachine: Decoding PEM data...
	I1028 11:31:04.173077    3404 main.go:141] libmachine: Parsing certificate...
	I1028 11:31:04.173142    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 11:31:06.273171    3404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 11:31:06.273171    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:06.273171    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 11:31:08.191092    3404 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 11:31:08.191092    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:08.192052    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:31:09.805432    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:31:09.805432    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:09.805432    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:31:13.798323    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:31:13.798323    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:13.800854    3404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:31:14.329718    3404 main.go:141] libmachine: Creating SSH key...
	I1028 11:31:14.487405    3404 main.go:141] libmachine: Creating VM...
	I1028 11:31:14.487405    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 11:31:17.646183    3404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 11:31:17.647119    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:17.647188    3404 main.go:141] libmachine: Using switch "Default Switch"
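
Note: the switch selection above shells out to PowerShell and decodes the ConvertTo-Json output. A small sketch of that pattern from Go follows; the struct fields mirror the Id/Name/SwitchType columns shown in the output, and error handling is trimmed. This is not the libmachine driver code itself.

// vmswitch_sketch.go - sketch of listing Hyper-V switches by shelling out to
// PowerShell and decoding the ConvertTo-Json output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
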
	I1028 11:31:17.647216    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 11:31:19.593763    3404 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 11:31:19.594759    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:19.594806    3404 main.go:141] libmachine: Creating VHD
	I1028 11:31:19.594950    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 11:31:23.482772    3404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3384BBC-27C0-454C-978E-068E6868F243
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 11:31:23.483638    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:23.483638    3404 main.go:141] libmachine: Writing magic tar header
	I1028 11:31:23.483900    3404 main.go:141] libmachine: Writing SSH key tar header
	I1028 11:31:23.495586    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 11:31:26.834194    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:26.835007    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:26.835260    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\disk.vhd' -SizeBytes 20000MB
	I1028 11:31:29.590992    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:29.591346    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:29.591527    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-201400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 11:31:33.440822    3404 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-201400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 11:31:33.440907    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:33.440907    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-201400-m03 -DynamicMemoryEnabled $false
	I1028 11:31:35.929563    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:35.929563    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:35.929563    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-201400-m03 -Count 2
	I1028 11:31:38.278858    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:38.279463    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:38.279463    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-201400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\boot2docker.iso'
	I1028 11:31:41.011172    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:41.011172    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:41.011172    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-201400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\disk.vhd'
	I1028 11:31:43.883810    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:43.883810    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:43.883810    3404 main.go:141] libmachine: Starting VM...
	I1028 11:31:43.883810    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-201400-m03
	I1028 11:31:47.163638    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:47.163638    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:47.164115    3404 main.go:141] libmachine: Waiting for host to start...
	I1028 11:31:47.164176    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:31:49.631032    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:31:49.631032    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:49.631911    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:31:52.311475    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:52.311766    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:53.312451    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:31:55.722707    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:31:55.723169    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:55.723360    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:31:58.424042    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:31:58.424433    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:31:59.424938    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:01.813681    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:01.814283    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:01.814419    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:04.525342    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:32:04.525342    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:05.525933    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:07.936429    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:07.936429    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:07.936429    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:10.638434    3404 main.go:141] libmachine: [stdout =====>] : 
	I1028 11:32:10.638434    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:11.640547    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:14.022562    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:14.022649    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:14.022649    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:16.819611    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:16.819611    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:16.820395    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:19.079573    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:19.079643    3404 main.go:141] libmachine: [stderr =====>] : 
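
Note: the "Waiting for host to start..." phase above keeps re-running the same two PowerShell queries (VM state, then the first IP of the first network adapter) until the guest reports an address. A compact sketch of such a polling loop, assuming the VM name and timeout shown here only as placeholders:

// waitforip_sketch.go - sketch of polling a Hyper-V VM until it has an IP,
// mirroring the Get-VM ... networkadapters[0].ipaddresses[0] loop above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func psOut(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-201400-m03" // placeholder VM name
	deadline := time.Now().Add(5 * time.Minute) // placeholder timeout
	for time.Now().Before(deadline) {
		state, _ := psOut(`( Hyper-V\Get-VM ` + vm + ` ).state`)
		ip, _ := psOut(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
		if state == "Running" && ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for an IP")
}
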
	I1028 11:32:19.079704    3404 machine.go:93] provisionDockerMachine start ...
	I1028 11:32:19.079824    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:21.410052    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:21.410115    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:21.410115    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:24.153066    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:24.153066    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:24.159575    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:24.173058    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:24.173058    3404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:32:24.298523    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 11:32:24.298523    3404 buildroot.go:166] provisioning hostname "ha-201400-m03"
	I1028 11:32:24.298643    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:26.604882    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:26.604882    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:26.604882    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:29.335617    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:29.335774    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:29.342709    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:29.342785    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:29.343519    3404 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-201400-m03 && echo "ha-201400-m03" | sudo tee /etc/hostname
	I1028 11:32:29.491741    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-201400-m03
	
	I1028 11:32:29.491741    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:31.813044    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:31.813121    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:31.813187    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:34.580181    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:34.580243    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:34.585735    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:34.586285    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:34.586348    3404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-201400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-201400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-201400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:32:34.739085    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
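
Note: the provisioning steps above open a native SSH session and run the hostname / /etc/hosts commands shown. A bare-bones sketch of running such a command with golang.org/x/crypto/ssh; the key path and address are copied from this log purely as placeholders, and this is not libmachine's SSH client.

// sshhostname_sketch.go - sketch of running the hostname command over SSH,
// roughly what the "Using SSH client type: native" lines above are doing.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.27.254.230:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname ha-201400-m03 && echo "ha-201400-m03" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}
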
	I1028 11:32:34.739158    3404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 11:32:34.739158    3404 buildroot.go:174] setting up certificates
	I1028 11:32:34.739236    3404 provision.go:84] configureAuth start
	I1028 11:32:34.739236    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:37.038417    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:37.038629    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:37.038629    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:39.850893    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:39.851830    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:39.851921    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:42.203798    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:42.203882    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:42.203882    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:44.974041    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:44.974041    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:44.974180    3404 provision.go:143] copyHostCerts
	I1028 11:32:44.974325    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 11:32:44.974679    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:32:44.974821    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 11:32:44.975260    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:32:44.977021    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 11:32:44.977422    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:32:44.977487    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 11:32:44.978016    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:32:44.978685    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 11:32:44.979373    3404 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:32:44.979373    3404 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 11:32:44.979772    3404 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 11:32:44.981035    3404 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-201400-m03 san=[127.0.0.1 172.27.254.230 ha-201400-m03 localhost minikube]
	I1028 11:32:45.234548    3404 provision.go:177] copyRemoteCerts
	I1028 11:32:45.245541    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:32:45.245541    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:47.535918    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:47.535918    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:47.536472    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:50.301858    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:50.301858    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:50.302089    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:32:50.409365    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1637653s)
	I1028 11:32:50.409365    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:32:50.410018    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:32:50.462954    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:32:50.462954    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:32:50.526104    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:32:50.526104    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:32:50.583918    3404 provision.go:87] duration metric: took 15.844503s to configureAuth
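
Note: configureAuth above generates a server certificate signed by the local CA with the SAN list printed at provision.go:117 (127.0.0.1 172.27.254.230 ha-201400-m03 localhost minikube). A compressed crypto/x509 sketch of issuing such a cert is below; the caller is assumed to supply the parsed CA certificate and key (e.g. from ca.pem / ca-key.pem), PEM encoding of the result is omitted, and this is not minikube's actual certificate helper.

// servercert_sketch.go - sketch of issuing a server cert with the SANs shown above.
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert returns DER-encoded cert bytes plus the new private key.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-201400-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-201400-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.254.230")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
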
	I1028 11:32:50.583918    3404 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:32:50.584982    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:32:50.585096    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:52.877277    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:52.877703    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:52.877703    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:32:55.618551    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:32:55.618551    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:55.625107    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:55.625643    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:32:55.625643    3404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:32:55.753218    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 11:32:55.753388    3404 buildroot.go:70] root file system type: tmpfs
	I1028 11:32:55.753506    3404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:32:55.753605    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:32:58.095514    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:32:58.096206    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:32:58.096320    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:00.865268    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:00.865268    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:00.870598    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:00.871041    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:00.871041    3404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.248.250"
	Environment="NO_PROXY=172.27.248.250,172.27.250.174"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:33:01.032161    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.248.250
	Environment=NO_PROXY=172.27.248.250,172.27.250.174
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:33:01.032241    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:03.317619    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:03.318691    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:03.318959    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:06.117473    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:06.118542    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:06.124212    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:06.124739    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:06.124739    3404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:33:08.409336    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 11:33:08.409421    3404 machine.go:96] duration metric: took 49.3291592s to provisionDockerMachine
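For reference, the docker.service update above is deliberately idempotent: the candidate unit is written to docker.service.new and only swapped in (with a daemon-reload, enable and restart) when it differs from what is already on disk. A minimal local sketch of that pattern, assuming plain shell access instead of minikube's SSH runner; the --apply flag and the truncated unit body are placeholders, the shell one-liner is the one from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// swapCmd is the conditional replace-and-restart one-liner seen in the log:
// diff exits non-zero when the files differ (or the old unit is missing),
// which is exactly when the new unit should be moved into place.
const swapCmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`

// updateDockerUnit writes the candidate unit and applies the conditional swap.
// This sketch shells out locally; minikube issues the same commands over SSH.
func updateDockerUnit(newUnit string) error {
	write := exec.Command("sudo", "tee", "/lib/systemd/system/docker.service.new")
	write.Stdin = strings.NewReader(newUnit)
	if err := write.Run(); err != nil {
		return fmt.Errorf("writing docker.service.new: %w", err)
	}
	return exec.Command("/bin/sh", "-c", swapCmd).Run()
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "--apply" {
		// Placeholder unit body; the full generated unit is shown above.
		if err := updateDockerUnit("[Unit]\nDescription=Docker Application Container Engine\n"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		return
	}
	// Dry run by default: just show the conditional restart command.
	fmt.Println(swapCmd)
}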
	I1028 11:33:08.409421    3404 client.go:171] duration metric: took 2m4.2368364s to LocalClient.Create
	I1028 11:33:08.409476    3404 start.go:167] duration metric: took 2m4.2368364s to libmachine.API.Create "ha-201400"
	I1028 11:33:08.409476    3404 start.go:293] postStartSetup for "ha-201400-m03" (driver="hyperv")
	I1028 11:33:08.409514    3404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:33:08.421751    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:33:08.421751    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:10.745137    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:10.745641    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:10.745809    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:13.552220    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:13.552220    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:13.552220    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:33:13.665726    3404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2439155s)
	I1028 11:33:13.677860    3404 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:33:13.685488    3404 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:33:13.685488    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 11:33:13.685955    3404 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 11:33:13.687085    3404 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 11:33:13.687085    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 11:33:13.702423    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:33:13.724872    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 11:33:13.773674    3404 start.go:296] duration metric: took 5.3641002s for postStartSetup
	I1028 11:33:13.777321    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:16.097507    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:16.097507    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:16.098321    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:18.857782    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:18.858123    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:18.858381    3404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\config.json ...
	I1028 11:33:18.860884    3404 start.go:128] duration metric: took 2m14.6930397s to createHost
	I1028 11:33:18.861003    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:21.218571    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:21.218571    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:21.218571    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:24.007550    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:24.008397    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:24.014031    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:24.014621    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:24.014732    3404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:33:24.146166    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115204.160216981
	
	I1028 11:33:24.146166    3404 fix.go:216] guest clock: 1730115204.160216981
	I1028 11:33:24.146166    3404 fix.go:229] Guest: 2024-10-28 11:33:24.160216981 +0000 UTC Remote: 2024-10-28 11:33:18.8610034 +0000 UTC m=+595.009783801 (delta=5.299213581s)
	I1028 11:33:24.146274    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:26.492581    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:26.493667    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:26.493667    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:29.198781    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:29.198781    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:29.206252    3404 main.go:141] libmachine: Using SSH client type: native
	I1028 11:33:29.206870    3404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.254.230 22 <nil> <nil>}
	I1028 11:33:29.206870    3404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730115204
	I1028 11:33:29.342901    3404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 11:33:24 UTC 2024
	
	I1028 11:33:29.342901    3404 fix.go:236] clock set: Mon Oct 28 11:33:24 UTC 2024
	 (err=<nil>)
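For reference, the clock fix above reads the guest's "date +%s.%N", compares it with the driving side's clock (the 5.29s delta in fix.go:229), and then sets the guest clock with "sudo date -s @<epoch>". A rough local sketch of that comparison; the 2-second threshold is an illustrative assumption, not minikube's actual cutoff:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` (locally here, over SSH in minikube),
// parses the reported epoch time and returns how far it is from our clock.
// Float parsing loses some nanosecond precision, which is fine for a sketch.
func guestClockDelta() (time.Time, time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return time.Time{}, 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	whole := int64(secs)
	guest := time.Unix(whole, int64((secs-float64(whole))*1e9))
	return guest, time.Since(guest), nil
}

func main() {
	guest, delta, err := guestClockDelta()
	if err != nil {
		fmt.Println("clock probe failed:", err)
		return
	}
	fmt.Printf("guest: %s delta: %s\n", guest, delta)
	// When the delta is too large, the log shows the fix applied in the VM:
	//   sudo date -s @<unix-seconds>
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}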
	I1028 11:33:29.342901    3404 start.go:83] releasing machines lock for "ha-201400-m03", held for 2m25.1753903s
	I1028 11:33:29.343199    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:31.658432    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:31.658727    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:31.658862    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:34.447452    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:34.447510    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:34.450031    3404 out.go:177] * Found network options:
	I1028 11:33:34.452942    3404 out.go:177]   - NO_PROXY=172.27.248.250,172.27.250.174
	W1028 11:33:34.455757    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.455757    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:33:34.458824    3404 out.go:177]   - NO_PROXY=172.27.248.250,172.27.250.174
	W1028 11:33:34.461441    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.461441    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.462809    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:33:34.462931    3404 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:33:34.465524    3404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:33:34.465721    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:34.475804    3404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:33:34.475804    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400-m03 ).state
	I1028 11:33:36.934037    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:36.934216    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400-m03 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:39.690303    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:39.690303    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:39.690997    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:33:39.716116    3404 main.go:141] libmachine: [stdout =====>] : 172.27.254.230
	
	I1028 11:33:39.716743    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:39.717180    3404 sshutil.go:53] new ssh client: &{IP:172.27.254.230 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400-m03\id_rsa Username:docker}
	I1028 11:33:39.785959    3404 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3100944s)
	W1028 11:33:39.785959    3404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:33:39.800048    3404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:33:39.804923    3404 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3393389s)
	W1028 11:33:39.804923    3404 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
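The registry warning a few lines below appears to stem from this failed probe: the Windows binary name "curl.exe" is sent to the Linux guest over SSH, where only "curl" exists, so the connectivity check can never succeed. A hypothetical helper (not minikube's code) that just illustrates choosing the binary name by target:

package main

import "fmt"

// registryCheckCmd rebuilds the connectivity probe from the log. The failure
// above happens because the Windows name "curl.exe" is used even though the
// command runs inside the Linux guest; this is purely illustrative.
func registryCheckCmd(targetIsWindows bool) string {
	curl := "curl"
	if targetIsWindows {
		curl = "curl.exe"
	}
	return fmt.Sprintf("%s -sS -m 2 https://registry.k8s.io/", curl)
}

func main() {
	fmt.Println(registryCheckCmd(false)) // inside the Linux VM
	fmt.Println(registryCheckCmd(true))  // on the Windows host
}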
	I1028 11:33:39.837547    3404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:33:39.837631    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:33:39.838028    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:33:39.892216    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:33:39.925070    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1028 11:33:39.926067    3404 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 11:33:39.926067    3404 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:33:39.953734    3404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:33:39.966683    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:33:40.013414    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:33:40.055969    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:33:40.095976    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:33:40.130789    3404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:33:40.164671    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:33:40.198078    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:33:40.233431    3404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:33:40.273347    3404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:33:40.295621    3404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:33:40.307268    3404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:33:40.340872    3404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
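For reference, this short block prepares bridge netfilter for the CNI: the sysctl probe is allowed to fail (the module is not loaded yet), br_netfilter is then loaded, and IPv4 forwarding is switched on. A minimal local sketch of the same sequence, with local exec standing in for the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log.
func ensureBridgeNetfilter() error {
	// The sysctl probe may fail, exactly as the log shows
	// ("cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables").
	_ = exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("loading br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}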
	I1028 11:33:40.378111    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:40.596824    3404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:33:40.637766    3404 start.go:495] detecting cgroup driver to use...
	I1028 11:33:40.650319    3404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:33:40.688745    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:33:40.723752    3404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:33:40.771046    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:33:40.808497    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:33:40.845526    3404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:33:40.914069    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:33:40.940742    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:33:40.991970    3404 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:33:41.012187    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:33:41.033575    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:33:41.086429    3404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:33:41.298370    3404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:33:41.493395    3404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:33:41.493395    3404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:33:41.541385    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:41.756572    3404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:33:44.368538    3404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6119366s)
	I1028 11:33:44.381312    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:33:44.419933    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:33:44.458482    3404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:33:44.677491    3404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:33:44.896287    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:45.114281    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:33:45.158661    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:33:45.196760    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:45.412812    3404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:33:45.536554    3404 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:33:45.548984    3404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:33:45.557701    3404 start.go:563] Will wait 60s for crictl version
	I1028 11:33:45.572716    3404 ssh_runner.go:195] Run: which crictl
	I1028 11:33:45.590540    3404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:33:45.655302    3404 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:33:45.666715    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:33:45.712150    3404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:33:45.748269    3404 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:33:45.751257    3404 out.go:177]   - env NO_PROXY=172.27.248.250
	I1028 11:33:45.754256    3404 out.go:177]   - env NO_PROXY=172.27.248.250,172.27.250.174
	I1028 11:33:45.756312    3404 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 11:33:45.761261    3404 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 11:33:45.764324    3404 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 11:33:45.764324    3404 ip.go:214] interface addr: 172.27.240.1/20
	I1028 11:33:45.775302    3404 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 11:33:45.782770    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
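For reference, the /etc/hosts update above strips any stale line ending in a tab plus "host.minikube.internal", appends the fresh mapping, and copies the temp file back with sudo. A small sketch that only builds the same one-liner; the IP and hostname are parameters here:

package main

import "fmt"

// hostsEntryCmd reproduces the /etc/hosts rewrite from the log. The grep uses
// ANSI-C quoting ($'\t...') to match a literal tab, and the echo emits a real
// tab between the IP and the name, just as the logged command does.
func hostsEntryCmd(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
}

func main() {
	// Matches the command issued at 11:33:45 in the log.
	fmt.Println(hostsEntryCmd("172.27.240.1", "host.minikube.internal"))
}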
	I1028 11:33:45.806556    3404 mustload.go:65] Loading cluster: ha-201400
	I1028 11:33:45.807319    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:33:45.807868    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:33:48.100577    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:48.100635    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:48.100635    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:33:48.101239    3404 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400 for IP: 172.27.254.230
	I1028 11:33:48.101239    3404 certs.go:194] generating shared ca certs ...
	I1028 11:33:48.101239    3404 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:33:48.101847    3404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 11:33:48.101847    3404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:33:48.102543    3404 certs.go:256] generating profile certs ...
	I1028 11:33:48.103163    3404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\client.key
	I1028 11:33:48.103393    3404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288
	I1028 11:33:48.103393    3404 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.248.250 172.27.250.174 172.27.254.230 172.27.255.254]
	I1028 11:33:48.237615    3404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288 ...
	I1028 11:33:48.237615    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288: {Name:mkc46df1f9e0c76e7c9cb770a4a5c629941349cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:33:48.239446    3404 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288 ...
	I1028 11:33:48.239446    3404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288: {Name:mk5457568e279a9532b182a66e070be2b509e809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:33:48.239893    3404 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt.1a357288 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt
	I1028 11:33:48.256003    3404 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key.1a357288 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key
	I1028 11:33:48.257480    3404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key
	I1028 11:33:48.257480    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:33:48.257735    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:33:48.258326    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:33:48.258438    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:33:48.258438    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:33:48.259073    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 11:33:48.259101    3404 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 11:33:48.259101    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 11:33:48.259838    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:33:48.259838    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:33:48.259838    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 11:33:48.260602    3404 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 11:33:48.260602    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:48.261292    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 11:33:48.261292    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 11:33:48.261292    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:33:50.582755    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:50.582755    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:50.582755    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:33:53.355928    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:33:53.355984    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:53.355984    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:33:53.450105    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:33:53.458269    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:33:53.492453    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:33:53.502207    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:33:53.537891    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:33:53.544824    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:33:53.579704    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:33:53.586100    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:33:53.619050    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:33:53.628633    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:33:53.665424    3404 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:33:53.672731    3404 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:33:53.694377    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:33:53.745142    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:33:53.796384    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:33:53.845752    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:33:53.895212    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:33:53.945245    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:33:53.994300    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:33:54.050528    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-201400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:33:54.106771    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:33:54.157919    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 11:33:54.207862    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 11:33:54.257434    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:33:54.290143    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:33:54.324751    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:33:54.359925    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:33:54.394481    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:33:54.430621    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:33:54.467028    3404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:33:54.515503    3404 ssh_runner.go:195] Run: openssl version
	I1028 11:33:54.537847    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:33:54.575680    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:54.585089    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:54.597334    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:33:54.619521    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:33:54.654899    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 11:33:54.688027    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 11:33:54.695905    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 11:33:54.709155    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 11:33:54.730855    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 11:33:54.764127    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 11:33:54.798539    3404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 11:33:54.805935    3404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 11:33:54.819515    3404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 11:33:54.840084    3404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
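For reference, each CA certificate above gets an OpenSSL subject-hash symlink under /etc/ssl/certs (for example minikubeCA.pem -> b5213941.0) so TLS clients can find it by hash. A simplified local sketch of that step; it links directly to the given PEM path, whereas the log links via the copy already placed under /etc/ssl/certs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert asks openssl for the subject hash of the PEM and (re)creates
// the /etc/ssl/certs/<hash>.0 symlink. Local exec stands in for minikube's
// SSH runner, and the sudo ln is shown rather than hidden.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}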
	I1028 11:33:54.870713    3404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:33:54.877062    3404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:33:54.877362    3404 kubeadm.go:934] updating node {m03 172.27.254.230 8443 v1.31.2 docker true true} ...
	I1028 11:33:54.877607    3404 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-201400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.254.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:33:54.877607    3404 kube-vip.go:115] generating kube-vip config ...
	I1028 11:33:54.891233    3404 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:33:54.919058    3404 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:33:54.919250    3404 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.255.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
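This generated manifest is later pushed into the node's static-pod directory (the "scp memory --> /etc/kubernetes/manifests/kube-vip.yaml" step further down). A minimal local sketch of writing such an in-memory asset, assuming direct file access instead of SSH; the manifest content below is a truncated placeholder, the full pod spec is shown above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeStaticPodManifest stands in for the "scp memory --> ..." step: the
// manifest only exists as a byte slice on the driving side and is written
// straight into the kubelet's static-pod directory on the node.
func writeStaticPodManifest(dir, name string, manifest []byte) error {
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, name), manifest, 0o644)
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
	if err := writeStaticPodManifest("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
		fmt.Println(err)
	}
}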
	I1028 11:33:54.930646    3404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:33:54.948760    3404 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:33:54.959651    3404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:33:54.982077    3404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:33:54.982262    3404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:33:54.982371    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:33:54.982077    3404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:33:54.982650    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:33:54.996719    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:33:54.996719    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:33:54.998584    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:33:55.024045    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:33:55.024045    3404 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:33:55.024045    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:33:55.024045    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:33:55.024045    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:33:55.041817    3404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:33:55.106434    3404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:33:55.106497    3404 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
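For reference, each Kubernetes binary is transferred only when the stat existence check fails, as it does here on a fresh node. A simplified local sketch of that decision; the cache path is illustrative (the log uses the Windows-side cache directory), and a fuller check would also compare the size reported by stat:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// ensureBinary mirrors the transfer logic in the log: a `stat` on the target
// path decides whether the cached kubeadm/kubectl/kubelet binary needs to be
// copied at all. Here the "copy" is a local cp; minikube scp's over SSH.
func ensureBinary(cacheDir, destDir, name string) error {
	dest := filepath.Join(destDir, name)
	if err := exec.Command("stat", "-c", "%s %y", dest).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	src := filepath.Join(cacheDir, name)
	return exec.Command("sudo", "cp", src, dest).Run()
}

func main() {
	cache := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.2")
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := ensureBinary(cache, "/var/lib/minikube/binaries/v1.31.2", bin); err != nil {
			fmt.Println(bin, "transfer failed:", err)
		}
	}
}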
	I1028 11:33:56.375413    3404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:33:56.396302    3404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1028 11:33:56.431359    3404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:33:56.466594    3404 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:33:56.518086    3404 ssh_runner.go:195] Run: grep 172.27.255.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:33:56.526079    3404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:33:56.563493    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:33:56.776882    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:33:56.813597    3404 host.go:66] Checking if "ha-201400" exists ...
	I1028 11:33:56.813859    3404 start.go:317] joinCluster: &{Name:ha-201400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-201400 Namespace:default APIServerHAVIP:172.27.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.248.250 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.250.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.27.254.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:33:56.814651    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:33:56.814721    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-201400 ).state
	I1028 11:33:59.109066    3404 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 11:33:59.109066    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:33:59.110037    3404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-201400 ).networkadapters[0]).ipaddresses[0]
	I1028 11:34:01.858253    3404 main.go:141] libmachine: [stdout =====>] : 172.27.248.250
	
	I1028 11:34:01.859150    3404 main.go:141] libmachine: [stderr =====>] : 
	I1028 11:34:01.859303    3404 sshutil.go:53] new ssh client: &{IP:172.27.248.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-201400\id_rsa Username:docker}
	I1028 11:34:02.079540    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2647488s)
	I1028 11:34:02.079955    3404 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.27.254.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:34:02.080193    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qo4296.00kz1cadrxef2kx2 --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m03 --control-plane --apiserver-advertise-address=172.27.254.230 --apiserver-bind-port=8443"
	I1028 11:34:49.236475    3404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qo4296.00kz1cadrxef2kx2 --discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-201400-m03 --control-plane --apiserver-advertise-address=172.27.254.230 --apiserver-bind-port=8443": (47.1557509s)
	I1028 11:34:49.237329    3404 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:34:50.027959    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-201400-m03 minikube.k8s.io/updated_at=2024_10_28T11_34_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-201400 minikube.k8s.io/primary=false
	I1028 11:34:50.251828    3404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-201400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:34:50.555605    3404 start.go:319] duration metric: took 53.7411393s to joinCluster
	I1028 11:34:50.555605    3404 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.27.254.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:34:50.557111    3404 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:34:50.563179    3404 out.go:177] * Verifying Kubernetes components...
	I1028 11:34:50.578099    3404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:34:50.981726    3404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:34:51.035312    3404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:34:51.036250    3404 kapi.go:59] client config for ha-201400: &rest.Config{Host:"https://172.27.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-201400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:34:51.036462    3404 kubeadm.go:483] Overriding stale ClientConfig host https://172.27.255.254:8443 with https://172.27.248.250:8443
	I1028 11:34:51.037392    3404 node_ready.go:35] waiting up to 6m0s for node "ha-201400-m03" to be "Ready" ...
	I1028 11:34:51.037598    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:51.037698    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:51.037728    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:51.037728    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:51.055566    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:34:51.537885    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:51.537885    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:51.537885    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:51.537885    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:51.545795    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:34:52.038169    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:52.038169    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:52.038169    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:52.038169    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:52.044629    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:52.537641    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:52.537641    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:52.537641    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:52.537641    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:52.543524    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:53.037673    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:53.037673    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:53.037673    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:53.037673    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:53.050242    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:34:53.051320    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:34:53.538424    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:53.538424    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:53.538424    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:53.538424    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:53.543934    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:54.038641    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:54.038641    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:54.038641    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:54.038641    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:54.218651    3404 round_trippers.go:574] Response Status: 200 OK in 179 milliseconds
	I1028 11:34:54.538507    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:54.538507    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:54.538507    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:54.538507    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:54.544637    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:55.039565    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:55.039606    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:55.039606    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:55.039606    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:55.050208    3404 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:34:55.538548    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:55.538548    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:55.538548    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:55.538548    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:55.567243    3404 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1028 11:34:55.568924    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:34:56.038450    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:56.038808    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:56.038808    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:56.038808    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:56.047279    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:34:56.538275    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:56.538275    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:56.538275    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:56.538409    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:56.543956    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:57.037935    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:57.037935    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:57.037935    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:57.037935    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:57.047668    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:34:57.538351    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:57.538438    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:57.538438    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:57.538503    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:57.544549    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:58.037604    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:58.037604    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:58.037604    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:58.037604    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:58.043811    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:34:58.044727    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:34:58.537808    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:58.537808    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:58.537808    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:58.537808    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:58.543650    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:34:59.038120    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:59.038120    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:59.038120    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:59.038120    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:59.046121    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:34:59.538850    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:34:59.538850    3404 round_trippers.go:469] Request Headers:
	I1028 11:34:59.538850    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:34:59.538850    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:34:59.546909    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:00.041250    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:00.041250    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:00.041337    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:00.041337    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:00.046024    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:00.046947    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:00.539302    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:00.539302    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:00.539404    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:00.539404    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:00.546092    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:01.039808    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:01.040034    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:01.040034    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:01.040034    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:01.046757    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:01.538816    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:01.538816    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:01.538816    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:01.538816    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:01.544407    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:02.038620    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:02.038620    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:02.038620    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:02.038620    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:02.045697    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:02.538146    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:02.538146    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:02.538146    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:02.538146    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:02.556496    3404 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1028 11:35:02.557356    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:03.038432    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:03.038432    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:03.038432    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:03.038432    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:03.043911    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:03.538734    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:03.538734    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:03.538734    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:03.538734    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:03.545555    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:04.039667    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:04.039667    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:04.039667    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:04.039667    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:04.057570    3404 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1028 11:35:04.538289    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:04.538289    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:04.538379    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:04.538379    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:04.542398    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:05.038857    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:05.038857    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:05.038857    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:05.038857    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:05.045125    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:05.045793    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:05.538209    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:05.538209    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:05.538209    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:05.538209    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:05.546733    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:35:06.042808    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:06.042897    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:06.042897    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:06.042897    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:06.047547    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:06.538969    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:06.538969    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:06.538969    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:06.538969    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:06.544123    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:07.037842    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:07.037842    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:07.037842    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:07.037842    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:07.043618    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:07.538806    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:07.538906    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:07.538906    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:07.538906    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:07.543478    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:07.544943    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:08.038469    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:08.038469    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:08.038469    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:08.038469    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:08.045786    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:08.538420    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:08.538568    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:08.538568    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:08.538568    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:08.544099    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:09.038231    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:09.038231    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:09.038231    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:09.038231    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:09.044519    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:09.538331    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:09.538331    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:09.538331    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:09.538331    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:09.544804    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:09.545386    3404 node_ready.go:53] node "ha-201400-m03" has status "Ready":"False"
	I1028 11:35:10.039186    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:10.039299    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:10.039299    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:10.039299    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:10.044793    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:10.538738    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:10.538886    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:10.538886    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:10.538886    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:10.544575    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.038670    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:11.038771    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.038771    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.038771    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.044854    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.045489    3404 node_ready.go:49] node "ha-201400-m03" has status "Ready":"True"
	I1028 11:35:11.045546    3404 node_ready.go:38] duration metric: took 20.0078432s for node "ha-201400-m03" to be "Ready" ...
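
The burst of GETs above (roughly one every half second, per the timestamps) is the node_ready poll: fetch the node object and check its Ready condition until it reports True or the 6m0s budget runs out. A hedged client-go sketch of that check, reusing the clientset from the earlier snippet; not the actual minikube code:

    // Poll a node's Ready condition until it is True or the timeout expires (sketch).
    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

Called as waitNodeReady(ctx, cs, "ha-201400-m03", 6*time.Minute), this mirrors the roughly 20s wait logged above.
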
	I1028 11:35:11.045546    3404 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:35:11.045711    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:11.045711    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.045781    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.045781    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.060109    3404 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1028 11:35:11.071132    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.071132    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n2qnf
	I1028 11:35:11.071132    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.071132    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.071132    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.075846    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.076219    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.076219    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.076219    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.076219    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.082440    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.083777    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.083902    3404 pod_ready.go:82] duration metric: took 12.7699ms for pod "coredns-7c65d6cfc9-n2qnf" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.083902    3404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.084065    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zt6f6
	I1028 11:35:11.084065    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.084065    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.084065    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.089992    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.091073    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.091189    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.091189    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.091189    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.095483    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.096840    3404 pod_ready.go:93] pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.096840    3404 pod_ready.go:82] duration metric: took 12.9377ms for pod "coredns-7c65d6cfc9-zt6f6" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.096903    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.096979    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400
	I1028 11:35:11.096979    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.096979    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.096979    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.100391    3404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:35:11.101392    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.101392    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.101392    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.101392    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.105322    3404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:35:11.106320    3404 pod_ready.go:93] pod "etcd-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.106320    3404 pod_ready.go:82] duration metric: took 9.417ms for pod "etcd-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.106320    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.106320    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m02
	I1028 11:35:11.106320    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.106320    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.106320    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.110525    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.111517    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:11.111517    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.111517    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.111517    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.115812    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:11.117574    3404 pod_ready.go:93] pod "etcd-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.117632    3404 pod_ready.go:82] duration metric: took 11.3109ms for pod "etcd-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.117682    3404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.238978    3404 request.go:632] Waited for 121.2948ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m03
	I1028 11:35:11.238978    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-201400-m03
	I1028 11:35:11.238978    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.238978    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.238978    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.245636    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.438588    3404 request.go:632] Waited for 192.043ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:11.438588    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:11.438588    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.438588    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.438588    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.444290    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.444928    3404 pod_ready.go:93] pod "etcd-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.444987    3404 pod_ready.go:82] duration metric: took 327.3011ms for pod "etcd-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
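
The "Waited ... due to client-side throttling" lines come from client-go's own rate limiter, not API-server priority and fairness: the rest.Config dumped earlier has QPS:0 and Burst:0, which client-go treats as the defaults of 5 requests/second with a burst of 10, so the back-to-back pod and node GETs get spaced out. If faster polling were wanted, the limits could be raised before building the clientset; a hedged fragment against the cfg from the first sketch:

    // Raise the client-side rate limit (illustrative; minikube keeps the defaults).
    cfg.QPS = 50    // 0 means the default of 5 requests/second
    cfg.Burst = 100 // 0 means the default burst of 10
    cs, err := kubernetes.NewForConfig(cfg)
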
	I1028 11:35:11.444987    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.638590    3404 request.go:632] Waited for 193.5452ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:35:11.638590    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400
	I1028 11:35:11.638590    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.639038    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.639038    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.644857    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:11.838687    3404 request.go:632] Waited for 193.7661ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.838972    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:11.838972    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:11.838972    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:11.838972    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:11.845130    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:11.845688    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:11.845738    3404 pod_ready.go:82] duration metric: took 400.7464ms for pod "kube-apiserver-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:11.845738    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.039295    3404 request.go:632] Waited for 193.5552ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:35:12.039295    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m02
	I1028 11:35:12.039295    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.039295    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.039295    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.045989    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:12.238767    3404 request.go:632] Waited for 191.4366ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:12.238767    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:12.238767    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.238767    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.238767    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.244867    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:12.245532    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:12.245532    3404 pod_ready.go:82] duration metric: took 399.7897ms for pod "kube-apiserver-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.245532    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.439152    3404 request.go:632] Waited for 193.513ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m03
	I1028 11:35:12.439152    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-201400-m03
	I1028 11:35:12.439152    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.439152    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.439152    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.445162    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:12.638476    3404 request.go:632] Waited for 192.2204ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:12.638476    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:12.638476    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.638476    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.638476    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.644225    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:12.644852    3404 pod_ready.go:93] pod "kube-apiserver-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:12.644922    3404 pod_ready.go:82] duration metric: took 399.3861ms for pod "kube-apiserver-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.644922    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:12.838668    3404 request.go:632] Waited for 193.6148ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:35:12.838668    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400
	I1028 11:35:12.838668    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:12.838668    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:12.838668    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:12.847614    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:35:13.039753    3404 request.go:632] Waited for 190.8875ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:13.040201    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:13.040257    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.040257    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.040257    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.046885    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.047796    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:13.047796    3404 pod_ready.go:82] duration metric: took 402.8686ms for pod "kube-controller-manager-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.047796    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.238781    3404 request.go:632] Waited for 190.8336ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:35:13.239204    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m02
	I1028 11:35:13.239204    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.239204    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.239204    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.245683    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.438641    3404 request.go:632] Waited for 192.1838ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:13.439177    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:13.439251    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.439251    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.439251    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.445382    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.445916    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:13.446082    3404 pod_ready.go:82] duration metric: took 398.2166ms for pod "kube-controller-manager-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.446082    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.638606    3404 request.go:632] Waited for 192.5215ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m03
	I1028 11:35:13.639028    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-201400-m03
	I1028 11:35:13.639028    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.639028    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.639028    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.644958    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:13.839297    3404 request.go:632] Waited for 193.2724ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:13.839297    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:13.839297    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:13.839297    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:13.839297    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:13.845337    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:13.846070    3404 pod_ready.go:93] pod "kube-controller-manager-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:13.846070    3404 pod_ready.go:82] duration metric: took 399.9833ms for pod "kube-controller-manager-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:13.846174    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.038769    3404 request.go:632] Waited for 192.5926ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:35:14.039075    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fg4c7
	I1028 11:35:14.039075    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.039075    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.039075    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.044408    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:14.238600    3404 request.go:632] Waited for 192.1111ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:14.238600    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:14.239171    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.239171    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.239171    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.245391    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:14.246032    3404 pod_ready.go:93] pod "kube-proxy-fg4c7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:14.246032    3404 pod_ready.go:82] duration metric: took 399.8534ms for pod "kube-proxy-fg4c7" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.246032    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.438891    3404 request.go:632] Waited for 192.8564ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:35:14.439159    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkdzx
	I1028 11:35:14.439159    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.439159    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.439159    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.445836    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:14.639064    3404 request.go:632] Waited for 192.7643ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:14.639525    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:14.639632    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.639632    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.639698    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.647950    3404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:35:14.648744    3404 pod_ready.go:93] pod "kube-proxy-hkdzx" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:14.648744    3404 pod_ready.go:82] duration metric: took 402.7072ms for pod "kube-proxy-hkdzx" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.648744    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn4tk" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:14.838860    3404 request.go:632] Waited for 190.1144ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rn4tk
	I1028 11:35:14.839211    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rn4tk
	I1028 11:35:14.839211    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:14.839211    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:14.839211    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:14.844175    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:15.039410    3404 request.go:632] Waited for 194.1289ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:15.039809    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:15.039809    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.039809    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.039809    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.046859    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:15.047493    3404 pod_ready.go:93] pod "kube-proxy-rn4tk" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:15.047553    3404 pod_ready.go:82] duration metric: took 398.8048ms for pod "kube-proxy-rn4tk" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.047611    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.240202    3404 request.go:632] Waited for 192.5306ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:35:15.240202    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400
	I1028 11:35:15.240202    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.240810    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.240810    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.247384    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:15.438690    3404 request.go:632] Waited for 190.3211ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:15.439105    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400
	I1028 11:35:15.439105    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.439105    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.439105    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.448552    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:35:15.450002    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:15.450061    3404 pod_ready.go:82] duration metric: took 402.4451ms for pod "kube-scheduler-ha-201400" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.450119    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.638593    3404 request.go:632] Waited for 188.4119ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:35:15.638593    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m02
	I1028 11:35:15.638593    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.638593    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.638593    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.642738    3404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:35:15.839292    3404 request.go:632] Waited for 194.3681ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:15.839686    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m02
	I1028 11:35:15.839686    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:15.839686    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:15.839686    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:15.845499    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:15.846530    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:15.846530    3404 pod_ready.go:82] duration metric: took 396.4069ms for pod "kube-scheduler-ha-201400-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:15.846530    3404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:16.039364    3404 request.go:632] Waited for 192.8318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m03
	I1028 11:35:16.039364    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-201400-m03
	I1028 11:35:16.039364    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.039364    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.039364    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.045609    3404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:35:16.238515    3404 request.go:632] Waited for 191.5595ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:16.238515    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes/ha-201400-m03
	I1028 11:35:16.238515    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.238515    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.238515    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.245885    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:16.246886    3404 pod_ready.go:93] pod "kube-scheduler-ha-201400-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:35:16.246945    3404 pod_ready.go:82] duration metric: took 400.3511ms for pod "kube-scheduler-ha-201400-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:35:16.246945    3404 pod_ready.go:39] duration metric: took 5.2013408s for extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
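
The per-pod waits above apply the same pattern as the node wait, only against the PodReady condition of each system-critical pod (found by name or by the label selectors listed in the log). A minimal sketch of one such check, assuming the clientset and imports from the node snippet:

    // Return true when every kube-system pod matching the selector has PodReady=True (sketch).
    func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

For example, podsReady(ctx, cs, "k8s-app=kube-proxy") would cover the three kube-proxy pods checked above.
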
	I1028 11:35:16.247004    3404 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:35:16.257998    3404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:35:16.290742    3404 api_server.go:72] duration metric: took 25.7342916s to wait for apiserver process to appear ...
	I1028 11:35:16.290804    3404 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:35:16.290804    3404 api_server.go:253] Checking apiserver healthz at https://172.27.248.250:8443/healthz ...
	I1028 11:35:16.301461    3404 api_server.go:279] https://172.27.248.250:8443/healthz returned 200:
	ok
	I1028 11:35:16.301461    3404 round_trippers.go:463] GET https://172.27.248.250:8443/version
	I1028 11:35:16.301461    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.301461    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.301461    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.303925    3404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:35:16.303925    3404 api_server.go:141] control plane version: v1.31.2
	I1028 11:35:16.303925    3404 api_server.go:131] duration metric: took 13.1207ms to wait for apiserver health ...
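
The healthz probe and the /version read above can be reproduced through the discovery client; a hedged fragment, same clientset assumption as before (needs "fmt" in addition to the earlier imports):

    // Hit /healthz, then read the control-plane version (sketch).
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    if err == nil && string(body) == "ok" {
        if info, err := cs.Discovery().ServerVersion(); err == nil {
            fmt.Println("control plane version:", info.GitVersion) // v1.31.2 here
        }
    }
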
	I1028 11:35:16.303925    3404 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:35:16.439254    3404 request.go:632] Waited for 135.3279ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.439254    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.439254    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.439254    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.439254    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.451407    3404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:35:16.464550    3404 system_pods.go:59] 24 kube-system pods found
	I1028 11:35:16.464550    3404 system_pods.go:61] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "etcd-ha-201400-m03" [b9057ad6-62aa-4b43-845a-bbf864d71066] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kindnet-5xvlb" [3561e5ab-664f-4377-ab6a-287cd5f68d85] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-apiserver-ha-201400-m03" [c4b4e094-2ef6-44b6-90a1-9ec79e7f83f1] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-controller-manager-ha-201400-m03" [544cf071-e35d-42c9-bc3e-bcc74426e10a] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-proxy-rn4tk" [b39a95c7-89e2-4c00-8506-3de2d9c161be] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-scheduler-ha-201400-m03" [e9723214-ff30-45ae-8572-80c03b363255] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "kube-vip-ha-201400-m03" [3f7e58ed-ce82-4278-989c-2aab7e02b15f] Running
	I1028 11:35:16.464550    3404 system_pods.go:61] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:35:16.464550    3404 system_pods.go:74] duration metric: took 160.6231ms to wait for pod list to return data ...
	I1028 11:35:16.465237    3404 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:35:16.639107    3404 request.go:632] Waited for 173.7499ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:35:16.639107    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:35:16.639107    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.639107    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.639314    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.644479    3404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:35:16.644479    3404 default_sa.go:45] found service account: "default"
	I1028 11:35:16.644479    3404 default_sa.go:55] duration metric: took 179.2401ms for default service account to be created ...
	I1028 11:35:16.644479    3404 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:35:16.839028    3404 request.go:632] Waited for 194.5475ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.839642    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/namespaces/kube-system/pods
	I1028 11:35:16.839642    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:16.839642    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:16.839642    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:16.849856    3404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:35:16.859972    3404 system_pods.go:86] 24 kube-system pods found
	I1028 11:35:16.859972    3404 system_pods.go:89] "coredns-7c65d6cfc9-n2qnf" [0a59b0c9-3860-43bb-9a01-6de060a0d8ec] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "coredns-7c65d6cfc9-zt6f6" [5670c78d-aab6-44c3-8f4a-09c21d9844fb] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "etcd-ha-201400" [54f8addb-a080-4679-b37b-561992049222] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "etcd-ha-201400-m02" [5928b40b-449f-4535-93ca-ecdcc4aad10c] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "etcd-ha-201400-m03" [b9057ad6-62aa-4b43-845a-bbf864d71066] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kindnet-5xvlb" [3561e5ab-664f-4377-ab6a-287cd5f68d85] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kindnet-cwkwx" [dee24e8e-11df-4371-bcc4-036802ed78f7] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kindnet-d99h6" [46efe528-d003-4a59-b6a3-f76548c4c236] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-apiserver-ha-201400" [b77b449a-eea7-4ac8-a482-01d4405aaff1] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-apiserver-ha-201400-m02" [f1405ed9-f514-4db2-acb3-f958193d27d4] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-apiserver-ha-201400-m03" [c4b4e094-2ef6-44b6-90a1-9ec79e7f83f1] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-controller-manager-ha-201400" [bbfa9c31-e00f-4f74-97d0-d2b617a99bda] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-controller-manager-ha-201400-m02" [bf76a2dc-b766-4642-92e4-15817852695d] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-controller-manager-ha-201400-m03" [544cf071-e35d-42c9-bc3e-bcc74426e10a] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-proxy-fg4c7" [a66dd88b-ed1d-4de7-b8cd-fec1b0e3b7c4] Running
	I1028 11:35:16.859972    3404 system_pods.go:89] "kube-proxy-hkdzx" [9c126bc0-db8a-442f-8e5a-a3cff3771d84] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-proxy-rn4tk" [b39a95c7-89e2-4c00-8506-3de2d9c161be] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-scheduler-ha-201400" [942de188-c1af-4430-b6d5-0eada56b52a7] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-scheduler-ha-201400-m02" [fcf968aa-fd96-4530-8f89-09d508d9a0a4] Running
	I1028 11:35:16.860504    3404 system_pods.go:89] "kube-scheduler-ha-201400-m03" [e9723214-ff30-45ae-8572-80c03b363255] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "kube-vip-ha-201400" [01349bec-7565-423f-b7a3-f4901c3aac8c] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "kube-vip-ha-201400-m02" [b9349828-0d42-41c9-a291-1147a6c5e426] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "kube-vip-ha-201400-m03" [3f7e58ed-ce82-4278-989c-2aab7e02b15f] Running
	I1028 11:35:16.860535    3404 system_pods.go:89] "storage-provisioner" [3610335c-bd06-440a-b518-cf74dd5af220] Running
	I1028 11:35:16.860535    3404 system_pods.go:126] duration metric: took 216.0542ms to wait for k8s-apps to be running ...
	I1028 11:35:16.860535    3404 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:35:16.871318    3404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:35:16.901278    3404 system_svc.go:56] duration metric: took 40.742ms WaitForService to wait for kubelet
	I1028 11:35:16.901278    3404 kubeadm.go:582] duration metric: took 26.3448209s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:35:16.901278    3404 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:35:17.042067    3404 request.go:632] Waited for 140.6084ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.248.250:8443/api/v1/nodes
	I1028 11:35:17.042067    3404 round_trippers.go:463] GET https://172.27.248.250:8443/api/v1/nodes
	I1028 11:35:17.042067    3404 round_trippers.go:469] Request Headers:
	I1028 11:35:17.042067    3404 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:35:17.042067    3404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:35:17.049884    3404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:35:17.051378    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:35:17.051378    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:35:17.051378    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:35:17.051378    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:35:17.051378    3404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:35:17.051378    3404 node_conditions.go:123] node cpu capacity is 2
	I1028 11:35:17.051378    3404 node_conditions.go:105] duration metric: took 149.9193ms to run NodePressure ...
	I1028 11:35:17.051378    3404 start.go:241] waiting for startup goroutines ...
	I1028 11:35:17.051554    3404 start.go:255] writing updated cluster config ...
	I1028 11:35:17.064594    3404 ssh_runner.go:195] Run: rm -f paused
	I1028 11:35:17.222554    3404 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:35:17.227166    3404 out.go:177] * Done! kubectl is now configured to use "ha-201400" cluster and "default" namespace by default
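
Note: the wait loop traced above (api_server.go) simply polls the apiserver's /healthz endpoint until it returns 200 "ok" before moving on to the pod and service-account checks. As a rough illustration only, the Go sketch below reproduces that kind of poll against the endpoint shown in the log; it assumes anonymous access to /healthz (the Kubernetes default) and skips TLS verification for brevity, so it is not how minikube itself performs the check.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Control-plane endpoint taken from the log above; adjust for your cluster.
        const healthz = "https://172.27.248.250:8443/healthz"

        // The apiserver's certificate is signed by the minikube CA, so verification
        // is skipped in this sketch.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(healthz)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver did not report healthy before the deadline")
    }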
	
	
	==> Docker <==
	Oct 28 11:27:08 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:27:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/49c04a2d5e21812b2c7e82476fb91f9b76c877eeca25c4e66382aa63b56e502b/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:27:08 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:27:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb2aa4b2b548eef445230cf2c3a200766113aeb266ecc8cf69faaa49088039ce/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:27:08 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:27:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dbff880c5c17f28b4eec93c33d392f3dba70e66dc941a97f0942d10a0cb1e19/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.530587825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.530670626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.530690326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.531018130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.536958698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.537189900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.537236501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.537965909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.574531628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.574698829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.574775530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:27:08 ha-201400 dockerd[1437]: time="2024-10-28T11:27:08.575774742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145206068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145336870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145355870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:35:58 ha-201400 dockerd[1437]: time="2024-10-28T11:35:58.145716676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:35:58 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:35:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1b312bd0d3e3e3de763a1951c21f9ab365e129d50fb50ed7e88db6c55a29fffb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 28 11:35:59 ha-201400 cri-dockerd[1329]: time="2024-10-28T11:35:59Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.079607066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.079705667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.079764768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:36:00 ha-201400 dockerd[1437]: time="2024-10-28T11:36:00.080064371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
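
Note: the dockerd entries above are routine containerd shim plugin loads; nothing in this excerpt is an error. The jump from 11:27:08 to 11:35:58 is the idle stretch before the busybox test image was pulled. The small Go sketch below, included only as an illustration, parses two of the journal timestamps (copied from the lines above) to show that gap.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the dockerd journal entries above.
        first, _ := time.Parse(time.RFC3339Nano, "2024-10-28T11:27:08.575774742Z")
        next, _ := time.Parse(time.RFC3339Nano, "2024-10-28T11:35:58.145206068Z")

        // The gap is the idle period between the initial sandbox setup and the
        // busybox image pull triggered later in the test run.
        fmt.Printf("gap between entries: %s\n", next.Sub(first).Round(time.Second))
    }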
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	209e04121e9c7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   1b312bd0d3e3e       busybox-7dff88458-gp9fd
	ce3d7e9066412       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   eb2aa4b2b548e       coredns-7c65d6cfc9-n2qnf
	64d978358caa1       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   49c04a2d5e218       coredns-7c65d6cfc9-zt6f6
	b639363d7d172       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   2dbff880c5c17       storage-provisioner
	7f47c99a1a2a9       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              26 minutes ago      Running             kindnet-cni               0                   363f5ef872145       kindnet-d99h6
	9f51b5bae8691       505d571f5fd56                                                                                         27 minutes ago      Running             kube-proxy                0                   e90af1cf3bbde       kube-proxy-fg4c7
	ab049c2140bcb       ghcr.io/kube-vip/kube-vip@sha256:b5049ac9e9e750783c32c69b88c48f7b0efb6b23f94f656471d5f82222fe1b72     27 minutes ago      Running             kube-vip                  0                   4f8837814079a       kube-vip-ha-201400
	afe94cc393c22       847c7bc1a5418                                                                                         27 minutes ago      Running             kube-scheduler            0                   fe63d450fb737       kube-scheduler-ha-201400
	fa49f1d4e69ac       9499c9960544e                                                                                         27 minutes ago      Running             kube-apiserver            0                   0ec9b0145aa57       kube-apiserver-ha-201400
	c2bfb2f1e6510       2e96e5913fc06                                                                                         27 minutes ago      Running             etcd                      0                   11a6643cdc967       etcd-ha-201400
	d70ee194fe7fd       0486b6c53a1b5                                                                                         27 minutes ago      Running             kube-controller-manager   0                   f8e1bb9eda406       kube-controller-manager-ha-201400
	
	
	==> coredns [64d978358caa] <==
	[INFO] 10.244.1.2:42364 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000074701s
	[INFO] 10.244.0.4:39059 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219603s
	[INFO] 10.244.3.2:41051 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223203s
	[INFO] 10.244.3.2:52465 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.060785981s
	[INFO] 10.244.3.2:47473 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179802s
	[INFO] 10.244.3.2:37784 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01243484s
	[INFO] 10.244.3.2:60128 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131002s
	[INFO] 10.244.1.2:58405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000678907s
	[INFO] 10.244.1.2:41270 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000208202s
	[INFO] 10.244.0.4:55035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137402s
	[INFO] 10.244.0.4:41846 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191502s
	[INFO] 10.244.0.4:57771 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000345904s
	[INFO] 10.244.3.2:52220 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208603s
	[INFO] 10.244.1.2:36760 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116701s
	[INFO] 10.244.1.2:42206 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173101s
	[INFO] 10.244.1.2:38287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147901s
	[INFO] 10.244.0.4:58812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302003s
	[INFO] 10.244.0.4:37201 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241703s
	[INFO] 10.244.0.4:46594 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000281503s
	[INFO] 10.244.3.2:38659 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000945511s
	[INFO] 10.244.3.2:35862 - 5 "PTR IN 1.240.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186602s
	[INFO] 10.244.1.2:52364 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000277003s
	[INFO] 10.244.0.4:43333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175902s
	[INFO] 10.244.0.4:55448 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102801s
	[INFO] 10.244.0.4:35819 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076401s
	
	
	==> coredns [ce3d7e906641] <==
	[INFO] 10.244.3.2:48251 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000271503s
	[INFO] 10.244.3.2:38266 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000516706s
	[INFO] 10.244.3.2:60132 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223902s
	[INFO] 10.244.1.2:46194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363404s
	[INFO] 10.244.1.2:41842 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015410673s
	[INFO] 10.244.1.2:47891 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	[INFO] 10.244.1.2:33575 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000335703s
	[INFO] 10.244.1.2:40207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146002s
	[INFO] 10.244.1.2:57094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071201s
	[INFO] 10.244.0.4:41269 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333604s
	[INFO] 10.244.0.4:32903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070601s
	[INFO] 10.244.0.4:42397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201703s
	[INFO] 10.244.0.4:37058 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.031470952s
	[INFO] 10.244.0.4:47788 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094801s
	[INFO] 10.244.3.2:49058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126501s
	[INFO] 10.244.3.2:39030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000388204s
	[INFO] 10.244.3.2:56997 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172602s
	[INFO] 10.244.1.2:45147 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236303s
	[INFO] 10.244.0.4:53698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000471306s
	[INFO] 10.244.3.2:45832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226402s
	[INFO] 10.244.3.2:44628 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000317503s
	[INFO] 10.244.1.2:35552 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207202s
	[INFO] 10.244.1.2:35517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000276603s
	[INFO] 10.244.1.2:32969 - 5 "PTR IN 1.240.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000203702s
	[INFO] 10.244.0.4:40599 - 5 "PTR IN 1.240.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000402805s
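
Note: both coredns replicas are answering lookups from the test pods for kubernetes.default and host.minikube.internal through the cluster DNS service at 10.96.0.10; the NXDOMAIN answers for the bare name kubernetes.default are expected, since only the fully qualified kubernetes.default.svc.cluster.local resolves inside the cluster. The Go sketch below issues that fully qualified lookup directly against the service address; it is illustrative only and assumes it runs from inside the cluster network (for example via kubectl exec in a test pod).

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Cluster DNS address taken from the coredns log above; reachable only
        // from inside the cluster network.
        resolver := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.96.0.10:53")
            },
        }

        // The fully qualified service name answers NOERROR, mirroring the entries above.
        addrs, err := resolver.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("kubernetes.default.svc.cluster.local ->", addrs)
    }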
	
	
	==> describe nodes <==
	Name:               ha-201400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_26_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:26:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:53:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:51:30 +0000   Mon, 28 Oct 2024 11:26:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:51:30 +0000   Mon, 28 Oct 2024 11:26:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:51:30 +0000   Mon, 28 Oct 2024 11:26:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:51:30 +0000   Mon, 28 Oct 2024 11:27:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.248.250
	  Hostname:    ha-201400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d9b21fc4b6e43f192a470b2b32c065c
	  System UUID:                4d027834-1578-3349-910e-6bd5fd5d19d3
	  Boot ID:                    938bfcc6-b024-401d-adf6-d844cbceb838
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gp9fd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-n2qnf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7c65d6cfc9-zt6f6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-201400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-d99h6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-201400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-201400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-fg4c7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-201400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-201400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-201400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-201400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-201400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node ha-201400 event: Registered Node ha-201400 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-201400 status is now: NodeReady
	  Normal  RegisteredNode           23m   node-controller  Node ha-201400 event: Registered Node ha-201400 in Controller
	  Normal  RegisteredNode           18m   node-controller  Node ha-201400 event: Registered Node ha-201400 in Controller
	
	
	Name:               ha-201400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_30_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:30:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:53:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:51:26 +0000   Mon, 28 Oct 2024 11:30:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:51:26 +0000   Mon, 28 Oct 2024 11:30:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:51:26 +0000   Mon, 28 Oct 2024 11:30:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:51:26 +0000   Mon, 28 Oct 2024 11:30:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.250.174
	  Hostname:    ha-201400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 67e52766d1694419a06b7897d409cd07
	  System UUID:                2f914c11-708e-3647-87e1-cddb2789e410
	  Boot ID:                    bcae8fc4-a9dd-40fa-9483-4c4c8a12d2e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cvthb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-201400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-cwkwx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-201400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-201400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-hkdzx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-201400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-201400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-201400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-201400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-201400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     23m                cidrAllocator    Node ha-201400-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           23m                node-controller  Node ha-201400-m02 event: Registered Node ha-201400-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-201400-m02 event: Registered Node ha-201400-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-201400-m02 event: Registered Node ha-201400-m02 in Controller
	
	
	Name:               ha-201400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_34_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:34:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:53:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:51:31 +0000   Mon, 28 Oct 2024 11:34:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:51:31 +0000   Mon, 28 Oct 2024 11:34:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:51:31 +0000   Mon, 28 Oct 2024 11:34:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:51:31 +0000   Mon, 28 Oct 2024 11:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.254.230
	  Hostname:    ha-201400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cbf09a9edff42f8a4b57c2e4fd514f4
	  System UUID:                d9636759-aa61-cc45-ad61-dd9dce51708f
	  Boot ID:                    4b48913b-bf4e-45b2-91b5-e33dd68d8730
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b84wl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-201400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-5xvlb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-201400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-201400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-rn4tk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-201400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-201400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  CIDRAssignmentFailed     19m                cidrAllocator    Node ha-201400-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-201400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-201400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-201400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-201400-m03 event: Registered Node ha-201400-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-201400-m03 event: Registered Node ha-201400-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-201400-m03 event: Registered Node ha-201400-m03 in Controller
	
	
	Name:               ha-201400-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-201400-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-201400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_40_29_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:40:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-201400-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:53:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:51:13 +0000   Mon, 28 Oct 2024 11:40:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:51:13 +0000   Mon, 28 Oct 2024 11:40:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:51:13 +0000   Mon, 28 Oct 2024 11:40:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:51:13 +0000   Mon, 28 Oct 2024 11:41:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.250.248
	  Hostname:    ha-201400-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0bef9fb407d400e881dd6487557a0ca
	  System UUID:                549aeb95-f2f1-0242-9c6e-5e2916dc2cf0
	  Boot ID:                    84254b7c-79d8-4bc7-806a-67e1f73d3a5f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6rtkh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-ccrlt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  CIDRAssignmentFailed     13m                cidrAllocator    Node ha-201400-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-201400-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-201400-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-201400-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-201400-m04 event: Registered Node ha-201400-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-201400-m04 event: Registered Node ha-201400-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-201400-m04 event: Registered Node ha-201400-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-201400-m04 status is now: NodeReady
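
Note: the Allocated resources blocks above express pod requests as a share of each node's allocatable capacity (2 CPUs, 2164264Ki memory). For example, the 950m of CPU requests on ha-201400 against 2000m allocatable is the 47% shown, and 290Mi of memory requests against 2164264Ki is the 13%. The short Go sketch below just reproduces that arithmetic from the figures in the table.

    package main

    import "fmt"

    func main() {
        // Figures taken from the "describe nodes" output above for ha-201400.
        const (
            allocatableMilliCPU = 2000       // 2 CPUs
            allocatableMemKi    = 2164264    // memory: 2164264Ki
            requestedMilliCPU   = 950        // cpu requests: 950m
            requestedMemKi      = 290 * 1024 // memory requests: 290Mi
        )

        fmt.Printf("cpu requests:    %d%%\n", requestedMilliCPU*100/allocatableMilliCPU)
        fmt.Printf("memory requests: %d%%\n", requestedMemKi*100/allocatableMemKi)
    }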
	
	
	==> dmesg <==
	[  +7.248029] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 11:25] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.189155] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Oct28 11:26] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.140755] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.581345] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.216742] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.231896] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.928180] systemd-fstab-generator[1282]: Ignoring "noauto" option for root device
	[  +0.223278] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.194653] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.283776] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[ +12.200504] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.111987] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.173282] systemd-fstab-generator[1678]: Ignoring "noauto" option for root device
	[  +6.810485] systemd-fstab-generator[1829]: Ignoring "noauto" option for root device
	[  +0.113552] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.887892] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.541446] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +5.371714] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.727467] kauditd_printk_skb: 29 callbacks suppressed
	[Oct28 11:30] kauditd_printk_skb: 26 callbacks suppressed
	[Oct28 11:40] hrtimer: interrupt took 3550532 ns
	
	
	==> etcd [c2bfb2f1e651] <==
	{"level":"info","ts":"2024-10-28T11:40:39.807421Z","caller":"traceutil/trace.go:171","msg":"trace[2010329053] linearizableReadLoop","detail":"{readStateIndex:3152; appliedIndex:3154; }","duration":"330.546995ms","start":"2024-10-28T11:40:39.476847Z","end":"2024-10-28T11:40:39.807394Z","steps":["trace[2010329053] 'read index received'  (duration: 330.542195ms)","trace[2010329053] 'applied index is now lower than readState.Index'  (duration: 3.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:40:39.807821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.952898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-201400-m04\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-10-28T11:40:39.807964Z","caller":"traceutil/trace.go:171","msg":"trace[690158193] range","detail":"{range_begin:/registry/minions/ha-201400-m04; range_end:; response_count:1; response_revision:2636; }","duration":"331.1134ms","start":"2024-10-28T11:40:39.476842Z","end":"2024-10-28T11:40:39.807955Z","steps":["trace[690158193] 'agreement among raft nodes before linearized reading'  (duration: 330.739896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:39.808000Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:39.476776Z","time spent":"331.212501ms","remote":"127.0.0.1:44590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":2836,"request content":"key:\"/registry/minions/ha-201400-m04\" "}
	{"level":"info","ts":"2024-10-28T11:40:39.808502Z","caller":"traceutil/trace.go:171","msg":"trace[1296192813] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"262.306877ms","start":"2024-10-28T11:40:39.546182Z","end":"2024-10-28T11:40:39.808489Z","steps":["trace[1296192813] 'process raft request'  (duration: 262.108775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:39.809488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.241433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-10-28T11:40:39.811514Z","caller":"traceutil/trace.go:171","msg":"trace[854708602] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2637; }","duration":"204.268651ms","start":"2024-10-28T11:40:39.607232Z","end":"2024-10-28T11:40:39.811501Z","steps":["trace[854708602] 'agreement among raft nodes before linearized reading'  (duration: 202.196032ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:40:39.928510Z","caller":"traceutil/trace.go:171","msg":"trace[729111278] transaction","detail":"{read_only:false; response_revision:2639; number_of_response:1; }","duration":"110.996506ms","start":"2024-10-28T11:40:39.817495Z","end":"2024-10-28T11:40:39.928492Z","steps":["trace[729111278] 'process raft request'  (duration: 92.999043ms)","trace[729111278] 'compare'  (duration: 17.909762ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:40:40.180502Z","caller":"traceutil/trace.go:171","msg":"trace[2076614846] linearizableReadLoop","detail":"{readStateIndex:3157; appliedIndex:3157; }","duration":"214.547743ms","start":"2024-10-28T11:40:39.965933Z","end":"2024-10-28T11:40:40.180481Z","steps":["trace[2076614846] 'read index received'  (duration: 214.542743ms)","trace[2076614846] 'applied index is now lower than readState.Index'  (duration: 3.7µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:40:40.240632Z","caller":"traceutil/trace.go:171","msg":"trace[1054356893] transaction","detail":"{read_only:false; response_revision:2640; number_of_response:1; }","duration":"284.424276ms","start":"2024-10-28T11:40:39.956186Z","end":"2024-10-28T11:40:40.240610Z","steps":["trace[1054356893] 'process raft request'  (duration: 224.957537ms)","trace[1054356893] 'compare'  (duration: 58.46413ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:40:40.284354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.04179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-201400-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-28T11:40:40.284435Z","caller":"traceutil/trace.go:171","msg":"trace[1458438580] range","detail":"{range_begin:/registry/minions/ha-201400-m04; range_end:; response_count:1; response_revision:2640; }","duration":"308.13079ms","start":"2024-10-28T11:40:39.976289Z","end":"2024-10-28T11:40:40.284420Z","steps":["trace[1458438580] 'agreement among raft nodes before linearized reading'  (duration: 265.0053ms)","trace[1458438580] 'range keys from in-memory index tree'  (duration: 43.011689ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:40:40.284748Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:39.976275Z","time spent":"308.456893ms","remote":"127.0.0.1:44590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":3137,"request content":"key:\"/registry/minions/ha-201400-m04\" "}
	{"level":"info","ts":"2024-10-28T11:40:45.555503Z","caller":"traceutil/trace.go:171","msg":"trace[2047083800] transaction","detail":"{read_only:false; response_revision:2657; number_of_response:1; }","duration":"104.233241ms","start":"2024-10-28T11:40:45.451252Z","end":"2024-10-28T11:40:45.555486Z","steps":["trace[2047083800] 'process raft request'  (duration: 103.975039ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:40:45.780276Z","caller":"traceutil/trace.go:171","msg":"trace[62485144] transaction","detail":"{read_only:false; response_revision:2658; number_of_response:1; }","duration":"128.220957ms","start":"2024-10-28T11:40:45.652037Z","end":"2024-10-28T11:40:45.780258Z","steps":["trace[62485144] 'process raft request'  (duration: 112.585116ms)","trace[62485144] 'compare'  (duration: 15.437139ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:40:46.478264Z","caller":"traceutil/trace.go:171","msg":"trace[1599978733] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"188.141197ms","start":"2024-10-28T11:40:46.290106Z","end":"2024-10-28T11:40:46.478247Z","steps":["trace[1599978733] 'process raft request'  (duration: 187.725993ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:41:33.284165Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1961}
	{"level":"info","ts":"2024-10-28T11:41:33.344952Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1961,"took":"58.560013ms","hash":2833721137,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-28T11:41:33.345034Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2833721137,"revision":1961,"compact-revision":1043}
	{"level":"info","ts":"2024-10-28T11:46:33.310801Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2789}
	{"level":"info","ts":"2024-10-28T11:46:33.374099Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2789,"took":"62.018967ms","hash":2733450747,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-28T11:46:33.374175Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2733450747,"revision":2789,"compact-revision":1961}
	{"level":"info","ts":"2024-10-28T11:51:33.338529Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3530}
	{"level":"info","ts":"2024-10-28T11:51:33.393283Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":3530,"took":"53.96097ms","hash":4153736554,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1908736,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-10-28T11:51:33.393431Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4153736554,"revision":3530,"compact-revision":2789}
	
	
	==> kernel <==
	 11:53:45 up 29 min,  0 users,  load average: 0.47, 0.56, 0.54
	Linux ha-201400 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7f47c99a1a2a] <==
	I1028 11:53:14.705789       1 main.go:323] Node ha-201400-m04 has CIDR [10.244.4.0/24] 
	I1028 11:53:24.710066       1 main.go:296] Handling node with IPs: map[172.27.250.248:{}]
	I1028 11:53:24.710177       1 main.go:323] Node ha-201400-m04 has CIDR [10.244.4.0/24] 
	I1028 11:53:24.710528       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:53:24.710620       1 main.go:300] handling current node
	I1028 11:53:24.710643       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:53:24.710726       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:53:24.711167       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:53:24.711184       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:53:34.713012       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:53:34.713170       1 main.go:300] handling current node
	I1028 11:53:34.713197       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:53:34.713214       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:53:34.713795       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:53:34.714245       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:53:34.714812       1 main.go:296] Handling node with IPs: map[172.27.250.248:{}]
	I1028 11:53:34.715136       1 main.go:323] Node ha-201400-m04 has CIDR [10.244.4.0/24] 
	I1028 11:53:44.710484       1 main.go:296] Handling node with IPs: map[172.27.248.250:{}]
	I1028 11:53:44.710583       1 main.go:300] handling current node
	I1028 11:53:44.710667       1 main.go:296] Handling node with IPs: map[172.27.250.174:{}]
	I1028 11:53:44.710680       1 main.go:323] Node ha-201400-m02 has CIDR [10.244.1.0/24] 
	I1028 11:53:44.711626       1 main.go:296] Handling node with IPs: map[172.27.254.230:{}]
	I1028 11:53:44.711644       1 main.go:323] Node ha-201400-m03 has CIDR [10.244.3.0/24] 
	I1028 11:53:44.712779       1 main.go:296] Handling node with IPs: map[172.27.250.248:{}]
	I1028 11:53:44.714681       1 main.go:323] Node ha-201400-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [fa49f1d4e69a] <==
	I1028 11:26:39.820016       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:26:39.858551       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:26:39.913705       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:26:43.735026       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:26:44.335558       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:34:43.221693       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.5µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1028 11:34:43.222136       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:34:43.301320       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1028 11:34:43.317282       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:34:43.352052       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="111.66733ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-201400-m03.18029aaec6f3a61c" result=null
	E1028 11:36:04.232222       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58141: use of closed network connection
	E1028 11:36:04.810503       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58143: use of closed network connection
	E1028 11:36:06.638438       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58145: use of closed network connection
	E1028 11:36:07.284460       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58147: use of closed network connection
	E1028 11:36:07.886130       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58149: use of closed network connection
	E1028 11:36:08.476464       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58151: use of closed network connection
	E1028 11:36:09.041023       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58153: use of closed network connection
	E1028 11:36:09.619643       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58155: use of closed network connection
	E1028 11:36:10.190242       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58157: use of closed network connection
	E1028 11:36:11.240344       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58160: use of closed network connection
	E1028 11:36:21.803256       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58162: use of closed network connection
	E1028 11:36:22.384039       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58165: use of closed network connection
	E1028 11:36:32.943672       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58167: use of closed network connection
	E1028 11:36:33.500830       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58170: use of closed network connection
	E1028 11:36:44.047431       1 conn.go:339] Error on socket receive: read tcp 172.27.255.254:8443->172.27.240.1:58172: use of closed network connection
	
	
	==> kube-controller-manager [d70ee194fe7f] <==
	I1028 11:40:29.549425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:30.202068       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:31.043118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:33.811928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:33.812360       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-201400-m04"
	I1028 11:40:34.088481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:34.393916       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:34.423530       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:40:39.812818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:41:00.241757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:41:01.925756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:41:01.928523       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-201400-m04"
	I1028 11:41:01.953632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:41:03.849118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:41:15.937459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m02"
	I1028 11:41:19.770535       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400"
	I1028 11:41:20.129225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:46:06.358556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:46:21.160305       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m02"
	I1028 11:46:25.429220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	I1028 11:46:25.934370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400"
	I1028 11:51:13.816162       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m04"
	I1028 11:51:26.255650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m02"
	I1028 11:51:30.567123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400"
	I1028 11:51:31.832939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-201400-m03"
	
	
	==> kube-proxy [9f51b5bae869] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:26:45.681723       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:26:45.724318       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.27.248.250"]
	E1028 11:26:45.724406       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:26:45.800088       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:26:45.800155       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:26:45.800204       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:26:45.804211       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:26:45.804955       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:26:45.805041       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:26:45.809285       1 config.go:199] "Starting service config controller"
	I1028 11:26:45.809516       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:26:45.809739       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:26:45.810087       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:26:45.812617       1 config.go:328] "Starting node config controller"
	I1028 11:26:45.812756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:26:45.910473       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:26:45.910541       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:26:45.913305       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [afe94cc393c2] <==
	E1028 11:26:36.794820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:36.845101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:26:36.845167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:36.872928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:26:36.873157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:36.965316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:26:36.965385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.026414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:26:37.026730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.083538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:26:37.083671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.117377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:26:37.117836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.148593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 11:26:37.149503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.208715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:26:37.209200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.343368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:26:37.344275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:26:37.393068       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 11:26:37.393338       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1028 11:26:39.903930       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:35:56.901211       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 946938ec-9c81-4b74-88bb-1468a578aa88(default/busybox-7dff88458-cvthb) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-cvthb"
	E1028 11:35:56.908277       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 946938ec-9c81-4b74-88bb-1468a578aa88(default/busybox-7dff88458-cvthb) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-cvthb"
	I1028 11:35:56.909400       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-cvthb" node="ha-201400-m02"
	
	
	==> kubelet <==
	Oct 28 11:49:39 ha-201400 kubelet[2332]: E1028 11:49:39.985553    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:49:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:49:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:49:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:49:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:50:39 ha-201400 kubelet[2332]: E1028 11:50:39.983968    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:50:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:50:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:50:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:50:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:51:39 ha-201400 kubelet[2332]: E1028 11:51:39.987320    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:51:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:51:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:51:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:51:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:52:39 ha-201400 kubelet[2332]: E1028 11:52:39.982914    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:52:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:52:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:52:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:52:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:53:39 ha-201400 kubelet[2332]: E1028 11:53:39.988604    2332 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:53:39 ha-201400 kubelet[2332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:53:39 ha-201400 kubelet[2332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:53:39 ha-201400 kubelet[2332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:53:39 ha-201400 kubelet[2332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-201400 -n ha-201400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-201400 -n ha-201400: (12.7963251s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-201400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (670.87s)
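The post-mortem data above can be replayed by hand against the same profile. This is a minimal sketch, run from a PowerShell prompt in the test workspace, assuming a local out/minikube-windows-amd64.exe build and that the ha-201400 profile still exists; the status and kubectl invocations mirror the helpers_test.go commands shown above, while the 25-line log tail is an assumption not recorded in this excerpt:

	# Hypothetical manual replay of the harness's post-mortem collection for the ha-201400 HA cluster
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-201400 -n ha-201400
	out/minikube-windows-amd64.exe -p ha-201400 logs -n 25
	kubectl --context ha-201400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running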

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (230.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E1028 12:26:39.720456    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (3m37.9552051s)

                                                
                                                
-- stdout --
	* [multinode-071500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "multinode-071500" primary control-plane node in "multinode-071500" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:25:08.752100   10928 out.go:345] Setting OutFile to fd 1460 ...
	I1028 12:25:08.843895   10928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:25:08.843992   10928 out.go:358] Setting ErrFile to fd 1552...
	I1028 12:25:08.843992   10928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:25:08.872437   10928 out.go:352] Setting JSON to false
	I1028 12:25:08.876910   10928 start.go:129] hostinfo: {"hostname":"minikube6","uptime":166133,"bootTime":1729952174,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 12:25:08.877086   10928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 12:25:08.884715   10928 out.go:177] * [multinode-071500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 12:25:08.890545   10928 notify.go:220] Checking for updates...
	I1028 12:25:08.893297   10928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:25:08.895956   10928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:25:08.898940   10928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 12:25:08.901173   10928 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:25:08.904150   10928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:25:08.908298   10928 config.go:182] Loaded profile config "ha-201400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:25:08.908735   10928 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:25:14.486625   10928 out.go:177] * Using the hyperv driver based on user configuration
	I1028 12:25:14.489881   10928 start.go:297] selected driver: hyperv
	I1028 12:25:14.489881   10928 start.go:901] validating driver "hyperv" against <nil>
	I1028 12:25:14.489881   10928 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:25:14.542858   10928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 12:25:14.544427   10928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:25:14.544511   10928 cni.go:84] Creating CNI manager for ""
	I1028 12:25:14.544627   10928 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 12:25:14.544627   10928 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 12:25:14.544858   10928 start.go:340] cluster config:
	{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:25:14.545160   10928 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:25:14.549667   10928 out.go:177] * Starting "multinode-071500" primary control-plane node in "multinode-071500" cluster
	I1028 12:25:14.552396   10928 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:25:14.552949   10928 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 12:25:14.553143   10928 cache.go:56] Caching tarball of preloaded images
	I1028 12:25:14.553534   10928 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 12:25:14.553534   10928 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 12:25:14.553534   10928 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:25:14.554094   10928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json: {Name:mk0de2f0d7fdd6955352f6c8a7862716c942c60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:25:14.554917   10928 start.go:360] acquireMachinesLock for multinode-071500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:25:14.555557   10928 start.go:364] duration metric: took 639.7µs to acquireMachinesLock for "multinode-071500"
	I1028 12:25:14.555557   10928 start.go:93] Provisioning new machine with config: &{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 12:25:14.555557   10928 start.go:125] createHost starting for "" (driver="hyperv")
	I1028 12:25:14.558800   10928 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 12:25:14.558800   10928 start.go:159] libmachine.API.Create for "multinode-071500" (driver="hyperv")
	I1028 12:25:14.558800   10928 client.go:168] LocalClient.Create starting
	I1028 12:25:14.559804   10928 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I1028 12:25:14.559804   10928 main.go:141] libmachine: Decoding PEM data...
	I1028 12:25:14.559804   10928 main.go:141] libmachine: Parsing certificate...
	I1028 12:25:14.559804   10928 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I1028 12:25:14.560851   10928 main.go:141] libmachine: Decoding PEM data...
	I1028 12:25:14.560851   10928 main.go:141] libmachine: Parsing certificate...
	I1028 12:25:14.560851   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1028 12:25:16.706681   10928 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1028 12:25:16.706681   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:16.707032   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1028 12:25:18.519666   10928 main.go:141] libmachine: [stdout =====>] : False
	
	I1028 12:25:18.520533   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:18.520617   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 12:25:20.076911   10928 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 12:25:20.076911   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:20.077088   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 12:25:23.726871   10928 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 12:25:23.726871   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:23.730531   10928 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 12:25:24.249781   10928 main.go:141] libmachine: Creating SSH key...
	I1028 12:25:24.361480   10928 main.go:141] libmachine: Creating VM...
	I1028 12:25:24.361480   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1028 12:25:27.300264   10928 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1028 12:25:27.300264   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:27.300264   10928 main.go:141] libmachine: Using switch "Default Switch"
	I1028 12:25:27.300264   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1028 12:25:29.157100   10928 main.go:141] libmachine: [stdout =====>] : True
	
	I1028 12:25:29.157100   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:29.157100   10928 main.go:141] libmachine: Creating VHD
	I1028 12:25:29.157100   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\fixed.vhd' -SizeBytes 10MB -Fixed
	I1028 12:25:32.932193   10928 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E517CC63-45CD-44C7-9654-78748B7534A6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1028 12:25:32.932909   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:32.932909   10928 main.go:141] libmachine: Writing magic tar header
	I1028 12:25:32.932993   10928 main.go:141] libmachine: Writing SSH key tar header
	I1028 12:25:32.944296   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\disk.vhd' -VHDType Dynamic -DeleteSource
	I1028 12:25:36.176952   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:36.176952   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:36.176952   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\disk.vhd' -SizeBytes 20000MB
	I1028 12:25:38.736425   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:38.736425   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:38.737042   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-071500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1028 12:25:42.311466   10928 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-071500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1028 12:25:42.312273   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:42.312339   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-071500 -DynamicMemoryEnabled $false
	I1028 12:25:44.546265   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:44.546265   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:44.546480   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-071500 -Count 2
	I1028 12:25:46.729184   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:46.729184   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:46.729591   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-071500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\boot2docker.iso'
	I1028 12:25:49.334998   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:49.334998   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:49.334998   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-071500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\disk.vhd'
	I1028 12:25:52.038806   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:52.038806   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:52.038806   10928 main.go:141] libmachine: Starting VM...
	I1028 12:25:52.038806   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-071500
	I1028 12:25:55.231201   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:25:55.231201   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:55.231201   10928 main.go:141] libmachine: Waiting for host to start...
	I1028 12:25:55.231201   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:25:57.578854   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:25:57.578854   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:25:57.578995   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:00.128407   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:26:00.128407   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:01.128629   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:03.369472   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:03.369472   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:03.370523   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:06.010009   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:26:06.010009   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:07.011093   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:09.278663   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:09.278819   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:09.278887   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:11.814962   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:26:11.814962   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:12.815615   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:15.075241   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:15.075241   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:15.075762   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:17.690607   10928 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:26:17.690607   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:18.691597   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:21.023989   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:21.023989   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:21.023989   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:23.740811   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:23.741049   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:23.741163   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:25.946539   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:25.946631   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:25.946631   10928 machine.go:93] provisionDockerMachine start ...
	I1028 12:26:25.946813   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:28.244016   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:28.244016   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:28.244115   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:30.924637   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:30.924910   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:30.930846   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:26:30.943154   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:26:30.943154   10928 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:26:31.076883   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:26:31.076883   10928 buildroot.go:166] provisioning hostname "multinode-071500"
	I1028 12:26:31.076883   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:33.315169   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:33.315668   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:33.315668   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:36.026565   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:36.026565   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:36.033440   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:26:36.034035   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:26:36.034162   10928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-071500 && echo "multinode-071500" | sudo tee /etc/hostname
	I1028 12:26:36.190932   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-071500
	
	I1028 12:26:36.190932   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:38.372052   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:38.372608   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:38.372666   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:40.966950   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:40.967056   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:40.972339   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:26:40.972339   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:26:40.972912   10928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-071500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-071500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-071500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:26:41.117791   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:26:41.117791   10928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 12:26:41.117791   10928 buildroot.go:174] setting up certificates
	I1028 12:26:41.117791   10928 provision.go:84] configureAuth start
	I1028 12:26:41.117791   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:43.314215   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:43.314708   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:43.314708   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:45.945048   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:45.945048   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:45.945177   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:48.151584   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:48.151584   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:48.151669   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:50.790475   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:50.791020   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:50.791020   10928 provision.go:143] copyHostCerts
	I1028 12:26:50.791295   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 12:26:50.791295   10928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 12:26:50.791295   10928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 12:26:50.791873   10928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 12:26:50.793186   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 12:26:50.793508   10928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 12:26:50.793508   10928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 12:26:50.793747   10928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 12:26:50.794944   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 12:26:50.795294   10928 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 12:26:50.795294   10928 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 12:26:50.795703   10928 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 12:26:50.797266   10928 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-071500 san=[127.0.0.1 172.27.249.25 localhost minikube multinode-071500]
	I1028 12:26:50.922885   10928 provision.go:177] copyRemoteCerts
	I1028 12:26:50.934881   10928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:26:50.934881   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:53.131124   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:53.131124   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:53.131124   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:26:55.736861   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:26:55.736861   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:55.738303   10928 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:26:55.852211   10928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9171736s)
	I1028 12:26:55.852211   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 12:26:55.852524   10928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:26:55.901543   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 12:26:55.901543   10928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 12:26:55.950195   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 12:26:55.950356   10928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:26:55.999462   10928 provision.go:87] duration metric: took 14.881503s to configureAuth
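At this point configureAuth has copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest, and the docker unit written further down starts dockerd with --tlsverify pointing at those files on tcp://0.0.0.0:2376. A hedged sketch, not part of the test run, of how that endpoint could be probed from the host with the client certificates minikube keeps under .minikube\certs (paths and IP are copied from the log):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	base := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\`
	cert, err := tls.LoadX509KeyPair(base+"cert.pem", base+"key.pem")
	if err != nil {
		fmt.Println("loading client cert:", err)
		return
	}
	caPEM, err := os.ReadFile(base + "ca.pem")
	if err != nil {
		fmt.Println("reading CA:", err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	// 2376 is the TLS port the dockerd ExecStart below listens on.
	conn, err := tls.Dial("tcp", "172.27.249.25:2376", &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	})
	if err != nil {
		fmt.Println("TLS handshake failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("docker TLS endpoint reachable at", conn.RemoteAddr())
}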
	I1028 12:26:55.999462   10928 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:26:55.999462   10928 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:26:55.999462   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:26:58.205618   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:26:58.205618   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:26:58.205618   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:00.784719   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:00.784719   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:00.791213   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:00.791933   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:27:00.792004   10928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 12:27:00.924113   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 12:27:00.924268   10928 buildroot.go:70] root file system type: tmpfs
	I1028 12:27:00.924502   10928 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 12:27:00.924535   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:03.166150   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:03.166150   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:03.167160   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:05.784411   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:05.784411   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:05.790471   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:05.791023   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:27:05.791209   10928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 12:27:05.948601   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 12:27:05.948601   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:08.169246   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:08.169246   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:08.170004   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:10.791936   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:10.792019   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:10.796662   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:10.796662   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:27:10.796662   10928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 12:27:13.062887   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 12:27:13.062887   10928 machine.go:96] duration metric: took 47.1157244s to provisionDockerMachine
	I1028 12:27:13.062887   10928 client.go:171] duration metric: took 1m58.5027494s to LocalClient.Create
	I1028 12:27:13.062887   10928 start.go:167] duration metric: took 1m58.5027494s to libmachine.API.Create "multinode-071500"
	I1028 12:27:13.062887   10928 start.go:293] postStartSetup for "multinode-071500" (driver="hyperv")
	I1028 12:27:13.062887   10928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:27:13.073909   10928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:27:13.073909   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:15.299961   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:15.299961   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:15.299961   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:17.972121   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:17.972121   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:17.972631   10928 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:27:18.084282   10928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.010316s)
	I1028 12:27:18.096888   10928 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:27:18.104303   10928 command_runner.go:130] > NAME=Buildroot
	I1028 12:27:18.104303   10928 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 12:27:18.104303   10928 command_runner.go:130] > ID=buildroot
	I1028 12:27:18.104303   10928 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 12:27:18.104303   10928 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 12:27:18.104303   10928 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:27:18.104303   10928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 12:27:18.104949   10928 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 12:27:18.105787   10928 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 12:27:18.105787   10928 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 12:27:18.119183   10928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:27:18.139818   10928 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 12:27:18.189852   10928 start.go:296] duration metric: took 5.1269074s for postStartSetup
	I1028 12:27:18.193115   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:20.382861   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:20.383437   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:20.383541   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:23.076484   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:23.076828   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:23.076828   10928 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:27:23.080298   10928 start.go:128] duration metric: took 2m8.5232907s to createHost
	I1028 12:27:23.080298   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:25.320620   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:25.321259   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:25.321259   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:28.034149   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:28.034149   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:28.040749   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:28.041392   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:27:28.041392   10928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:27:28.179263   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730118448.193291368
	
	I1028 12:27:28.179263   10928 fix.go:216] guest clock: 1730118448.193291368
	I1028 12:27:28.179263   10928 fix.go:229] Guest: 2024-10-28 12:27:28.193291368 +0000 UTC Remote: 2024-10-28 12:27:23.0802988 +0000 UTC m=+134.428456601 (delta=5.112992568s)
	I1028 12:27:28.179417   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:30.412985   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:30.414271   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:30.414376   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:33.100842   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:33.100842   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:33.107685   10928 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:33.108166   10928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.249.25 22 <nil> <nil>}
	I1028 12:27:33.108245   10928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730118448
	I1028 12:27:33.256425   10928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 12:27:28 UTC 2024
	
	I1028 12:27:33.256425   10928 fix.go:236] clock set: Mon Oct 28 12:27:28 UTC 2024
	 (err=<nil>)
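The fix.go lines above measure a 5.1 s skew between the guest clock and the host's view, then write an epoch second into the guest with sudo date -s. A small illustrative Go snippet reproducing the delta computation from the two timestamps recorded in the log; the threshold at which minikube decides to reset the clock is not shown here and is only an assumption:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the fix.go lines above.
	guest := time.Unix(1730118448, 193291368).UTC()
	host := time.Date(2024, 10, 28, 12, 27, 23, 80298800, time.UTC)

	delta := guest.Sub(host)
	fmt.Printf("guest=%s host=%s delta=%s\n", guest, host, delta)

	// Assumed threshold, for illustration only; the log simply shows the reset happening.
	if delta > time.Second || delta < -time.Second {
		// Same command the log records being run over SSH.
		fmt.Printf("reset over SSH: sudo date -s @%d\n", guest.Unix())
	}
}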
	I1028 12:27:33.256559   10928 start.go:83] releasing machines lock for "multinode-071500", held for 2m18.6993027s
	I1028 12:27:33.256781   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:35.502628   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:35.502628   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:35.502736   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:38.203432   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:38.203494   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:38.207171   10928 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 12:27:38.207171   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:38.217815   10928 ssh_runner.go:195] Run: cat /version.json
	I1028 12:27:38.218655   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:27:40.519148   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:40.519148   10928 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:27:40.519288   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:40.519148   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:40.519288   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:40.519288   10928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:27:43.340562   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:43.340639   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:43.341459   10928 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:27:43.370725   10928 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:27:43.370725   10928 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:27:43.371556   10928 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:27:43.430745   10928 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1028 12:27:43.431750   10928 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2245204s)
	W1028 12:27:43.431750   10928 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 12:27:43.464269   10928 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 12:27:43.464382   10928 ssh_runner.go:235] Completed: cat /version.json: (5.2463941s)
	I1028 12:27:43.475258   10928 ssh_runner.go:195] Run: systemctl --version
	I1028 12:27:43.485593   10928 command_runner.go:130] > systemd 252 (252)
	I1028 12:27:43.485657   10928 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 12:27:43.498012   10928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:27:43.505950   10928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 12:27:43.506321   10928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:27:43.517568   10928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1028 12:27:43.526347   10928 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 12:27:43.526347   10928 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
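The two warnings above trace back to the connectivity probe at 12:27:38: the command was executed inside the Linux guest over SSH, yet it carried the Windows binary name curl.exe, which the guest's bash cannot find (exit status 127). A tiny Go sketch of the kind of host-OS-dependent name selection that would produce this; it is an assumption about the cause, not minikube's actual code:

package main

import (
	"fmt"
	"runtime"
)

// curlBinary picks a binary name based on the *host* operating system. If that
// name is then run inside the Linux guest, bash reports
// "curl.exe: command not found", exactly as the log records.
func curlBinary() string {
	if runtime.GOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary(), "-sS -m 2 https://registry.k8s.io/")
}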
	I1028 12:27:43.551803   10928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1028 12:27:43.551888   10928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:27:43.551970   10928 start.go:495] detecting cgroup driver to use...
	I1028 12:27:43.551970   10928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:27:43.590659   10928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1028 12:27:43.602330   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 12:27:43.634482   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 12:27:43.654365   10928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 12:27:43.666432   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 12:27:43.694434   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:27:43.728604   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 12:27:43.759230   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:27:43.790254   10928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:27:43.823765   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 12:27:43.856283   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 12:27:43.887297   10928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 12:27:43.921318   10928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:27:43.939248   10928 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:27:43.940309   10928 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:27:43.951780   10928 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:27:43.983227   10928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:27:44.011248   10928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:44.226720   10928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 12:27:44.259870   10928 start.go:495] detecting cgroup driver to use...
	I1028 12:27:44.271454   10928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 12:27:44.295730   10928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1028 12:27:44.295810   10928 command_runner.go:130] > [Unit]
	I1028 12:27:44.295810   10928 command_runner.go:130] > Description=Docker Application Container Engine
	I1028 12:27:44.295810   10928 command_runner.go:130] > Documentation=https://docs.docker.com
	I1028 12:27:44.295810   10928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1028 12:27:44.295810   10928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1028 12:27:44.295810   10928 command_runner.go:130] > StartLimitBurst=3
	I1028 12:27:44.295810   10928 command_runner.go:130] > StartLimitIntervalSec=60
	I1028 12:27:44.295810   10928 command_runner.go:130] > [Service]
	I1028 12:27:44.295810   10928 command_runner.go:130] > Type=notify
	I1028 12:27:44.295810   10928 command_runner.go:130] > Restart=on-failure
	I1028 12:27:44.295810   10928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1028 12:27:44.295810   10928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1028 12:27:44.295810   10928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1028 12:27:44.295810   10928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1028 12:27:44.295810   10928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1028 12:27:44.295810   10928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1028 12:27:44.295810   10928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1028 12:27:44.295810   10928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1028 12:27:44.295810   10928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1028 12:27:44.295810   10928 command_runner.go:130] > ExecStart=
	I1028 12:27:44.295810   10928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1028 12:27:44.295810   10928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1028 12:27:44.295810   10928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1028 12:27:44.295810   10928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1028 12:27:44.295810   10928 command_runner.go:130] > LimitNOFILE=infinity
	I1028 12:27:44.295810   10928 command_runner.go:130] > LimitNPROC=infinity
	I1028 12:27:44.295810   10928 command_runner.go:130] > LimitCORE=infinity
	I1028 12:27:44.295810   10928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1028 12:27:44.295810   10928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1028 12:27:44.295810   10928 command_runner.go:130] > TasksMax=infinity
	I1028 12:27:44.295810   10928 command_runner.go:130] > TimeoutStartSec=0
	I1028 12:27:44.295810   10928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1028 12:27:44.295810   10928 command_runner.go:130] > Delegate=yes
	I1028 12:27:44.295810   10928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1028 12:27:44.295810   10928 command_runner.go:130] > KillMode=process
	I1028 12:27:44.295810   10928 command_runner.go:130] > [Install]
	I1028 12:27:44.295810   10928 command_runner.go:130] > WantedBy=multi-user.target
	I1028 12:27:44.307874   10928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:27:44.342812   10928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:27:44.398418   10928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:27:44.436127   10928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:27:44.471122   10928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 12:27:44.539436   10928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:27:44.564447   10928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:27:44.597225   10928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1028 12:27:44.609925   10928 ssh_runner.go:195] Run: which cri-dockerd
	I1028 12:27:44.616742   10928 command_runner.go:130] > /usr/bin/cri-dockerd
	I1028 12:27:44.627544   10928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 12:27:44.646074   10928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 12:27:44.690448   10928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 12:27:44.910155   10928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 12:27:45.103087   10928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 12:27:45.103412   10928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
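docker.go reports switching docker to the cgroupfs cgroup driver and pushes a 130-byte /etc/docker/daemon.json whose contents are not echoed in the log. A plausible shape for such a configuration, offered purely as an assumption (minikube's exact payload may differ):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical daemon.json content selecting the cgroupfs cgroup driver;
	// the real 130-byte file written above is not shown in this log.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(b))
}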
	I1028 12:27:45.156090   10928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:45.374691   10928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:28:46.486118   10928 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1028 12:28:46.486715   10928 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I1028 12:28:46.487171   10928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1117896s)
	I1028 12:28:46.499293   10928 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1028 12:28:46.525682   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	I1028 12:28:46.525682   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.429662894Z" level=info msg="Starting up"
	I1028 12:28:46.525682   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.432015694Z" level=info msg="containerd not running, starting managed containerd"
	I1028 12:28:46.525682   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.433299248Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I1028 12:28:46.525682   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.467307385Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I1028 12:28:46.525821   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495293167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1028 12:28:46.525821   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495346369Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1028 12:28:46.525881   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495407871Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1028 12:28:46.525881   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495426072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.525881   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495627681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:28:46.525976   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495722385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.525976   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495913293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:28:46.525976   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496014497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.525976   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496037398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:28:46.526095   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496051099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.526095   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496187904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496672425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500328779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500433884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500628792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500728596Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500845801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500994507Z" level=info msg="metadata content store policy set" policy=shared
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530426951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530614559Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530643260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530662861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530679661Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530803767Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531304588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531463994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531618601Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531642402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531663103Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531682104Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531697304Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531714705Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531732506Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531776608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531794108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526147   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531807909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531830710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531848511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531879812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531902813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531921814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531937814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531970616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526730   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531988817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526883   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532004917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526883   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532022418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526883   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532036319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526883   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532050919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526883   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532065220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526999   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532087321Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1028 12:28:46.526999   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532112122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526999   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532130623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.526999   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532146723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1028 12:28:46.527110   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532562241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1028 12:28:46.527110   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532617343Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1028 12:28:46.527110   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532635844Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1028 12:28:46.527110   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532651445Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1028 12:28:46.527193   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532663545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1028 12:28:46.527193   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532678846Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1028 12:28:46.527269   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532691446Z" level=info msg="NRI interface is disabled by configuration."
	I1028 12:28:46.527269   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532946057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1028 12:28:46.527269   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533030561Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1028 12:28:46.527269   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533108864Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1028 12:28:46.527344   10928 command_runner.go:130] > Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533133265Z" level=info msg="containerd successfully booted in 0.067467s"
	I1028 12:28:46.527344   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.499798085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1028 12:28:46.527344   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.536093123Z" level=info msg="Loading containers: start."
	I1028 12:28:46.527501   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.711917285Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1028 12:28:46.527501   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.947802139Z" level=info msg="Loading containers: done."
	I1028 12:28:46.527501   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971166243Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1028 12:28:46.527501   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971253447Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1028 12:28:46.527904   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971300749Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1028 12:28:46.527949   10928 command_runner.go:130] > Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971593363Z" level=info msg="Daemon has completed initialization"
	I1028 12:28:46.527973   10928 command_runner.go:130] > Oct 28 12:27:13 multinode-071500 dockerd[656]: time="2024-10-28T12:27:13.074716016Z" level=info msg="API listen on /var/run/docker.sock"
	I1028 12:28:46.527973   10928 command_runner.go:130] > Oct 28 12:27:13 multinode-071500 systemd[1]: Started Docker Application Container Engine.
	I1028 12:28:46.527973   10928 command_runner.go:130] > Oct 28 12:27:13 multinode-071500 dockerd[656]: time="2024-10-28T12:27:13.077592648Z" level=info msg="API listen on [::]:2376"
	I1028 12:28:46.527973   10928 command_runner.go:130] > Oct 28 12:27:45 multinode-071500 systemd[1]: Stopping Docker Application Container Engine...
	I1028 12:28:46.527973   10928 command_runner.go:130] > Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.416610229Z" level=info msg="Processing signal 'terminated'"
	I1028 12:28:46.528059   10928 command_runner.go:130] > Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419032248Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1028 12:28:46.528059   10928 command_runner.go:130] > Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419230249Z" level=info msg="Daemon shutdown complete"
	I1028 12:28:46.528059   10928 command_runner.go:130] > Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419330050Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1028 12:28:46.528059   10928 command_runner.go:130] > Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419355950Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1028 12:28:46.528059   10928 command_runner.go:130] > Oct 28 12:27:46 multinode-071500 systemd[1]: docker.service: Deactivated successfully.
	I1028 12:28:46.528059   10928 command_runner.go:130] > Oct 28 12:27:46 multinode-071500 systemd[1]: Stopped Docker Application Container Engine.
	I1028 12:28:46.528180   10928 command_runner.go:130] > Oct 28 12:27:46 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	I1028 12:28:46.528180   10928 command_runner.go:130] > Oct 28 12:27:46 multinode-071500 dockerd[1064]: time="2024-10-28T12:27:46.475097317Z" level=info msg="Starting up"
	I1028 12:28:46.528180   10928 command_runner.go:130] > Oct 28 12:28:46 multinode-071500 dockerd[1064]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1028 12:28:46.528304   10928 command_runner.go:130] > Oct 28 12:28:46 multinode-071500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1028 12:28:46.528304   10928 command_runner.go:130] > Oct 28 12:28:46 multinode-071500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1028 12:28:46.528304   10928 command_runner.go:130] > Oct 28 12:28:46 multinode-071500 systemd[1]: Failed to start Docker Application Container Engine.
	I1028 12:28:46.537659   10928 out.go:201] 
	W1028 12:28:46.542148   10928 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 28 12:27:11 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.429662894Z" level=info msg="Starting up"
	Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.432015694Z" level=info msg="containerd not running, starting managed containerd"
	Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.433299248Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.467307385Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495293167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495346369Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495407871Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495426072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495627681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495722385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495913293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496014497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496037398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496051099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496187904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496672425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500328779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500433884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500628792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500728596Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500845801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500994507Z" level=info msg="metadata content store policy set" policy=shared
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530426951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530614559Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530643260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530662861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530679661Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530803767Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531304588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531463994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531618601Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531642402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531663103Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531682104Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531697304Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531714705Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531732506Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531776608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531794108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531807909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531830710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531848511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531879812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531902813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531921814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531937814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531970616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531988817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532004917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532022418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532036319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532050919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532065220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532087321Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532112122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532130623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532146723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532562241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532617343Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532635844Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532651445Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532663545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532678846Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532691446Z" level=info msg="NRI interface is disabled by configuration."
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532946057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533030561Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533108864Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533133265Z" level=info msg="containerd successfully booted in 0.067467s"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.499798085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.536093123Z" level=info msg="Loading containers: start."
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.711917285Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.947802139Z" level=info msg="Loading containers: done."
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971166243Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971253447Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971300749Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971593363Z" level=info msg="Daemon has completed initialization"
	Oct 28 12:27:13 multinode-071500 dockerd[656]: time="2024-10-28T12:27:13.074716016Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 28 12:27:13 multinode-071500 systemd[1]: Started Docker Application Container Engine.
	Oct 28 12:27:13 multinode-071500 dockerd[656]: time="2024-10-28T12:27:13.077592648Z" level=info msg="API listen on [::]:2376"
	Oct 28 12:27:45 multinode-071500 systemd[1]: Stopping Docker Application Container Engine...
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.416610229Z" level=info msg="Processing signal 'terminated'"
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419032248Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419230249Z" level=info msg="Daemon shutdown complete"
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419330050Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419355950Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 28 12:27:46 multinode-071500 systemd[1]: docker.service: Deactivated successfully.
	Oct 28 12:27:46 multinode-071500 systemd[1]: Stopped Docker Application Container Engine.
	Oct 28 12:27:46 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	Oct 28 12:27:46 multinode-071500 dockerd[1064]: time="2024-10-28T12:27:46.475097317Z" level=info msg="Starting up"
	Oct 28 12:28:46 multinode-071500 dockerd[1064]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 28 12:28:46 multinode-071500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 28 12:28:46 multinode-071500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 28 12:28:46 multinode-071500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 28 12:27:11 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.429662894Z" level=info msg="Starting up"
	Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.432015694Z" level=info msg="containerd not running, starting managed containerd"
	Oct 28 12:27:11 multinode-071500 dockerd[656]: time="2024-10-28T12:27:11.433299248Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.467307385Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495293167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495346369Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495407871Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495426072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495627681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495722385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.495913293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496014497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496037398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496051099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496187904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.496672425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500328779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500433884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500628792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500728596Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500845801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.500994507Z" level=info msg="metadata content store policy set" policy=shared
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530426951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530614559Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530643260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530662861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530679661Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.530803767Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531304588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531463994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531618601Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531642402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531663103Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531682104Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531697304Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531714705Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531732506Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531776608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531794108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531807909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531830710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531848511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531879812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531902813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531921814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531937814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531970616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.531988817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532004917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532022418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532036319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532050919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532065220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532087321Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532112122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532130623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532146723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532562241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532617343Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532635844Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532651445Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532663545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532678846Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532691446Z" level=info msg="NRI interface is disabled by configuration."
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.532946057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533030561Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533108864Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 28 12:27:11 multinode-071500 dockerd[662]: time="2024-10-28T12:27:11.533133265Z" level=info msg="containerd successfully booted in 0.067467s"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.499798085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.536093123Z" level=info msg="Loading containers: start."
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.711917285Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.947802139Z" level=info msg="Loading containers: done."
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971166243Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971253447Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971300749Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 28 12:27:12 multinode-071500 dockerd[656]: time="2024-10-28T12:27:12.971593363Z" level=info msg="Daemon has completed initialization"
	Oct 28 12:27:13 multinode-071500 dockerd[656]: time="2024-10-28T12:27:13.074716016Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 28 12:27:13 multinode-071500 systemd[1]: Started Docker Application Container Engine.
	Oct 28 12:27:13 multinode-071500 dockerd[656]: time="2024-10-28T12:27:13.077592648Z" level=info msg="API listen on [::]:2376"
	Oct 28 12:27:45 multinode-071500 systemd[1]: Stopping Docker Application Container Engine...
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.416610229Z" level=info msg="Processing signal 'terminated'"
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419032248Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419230249Z" level=info msg="Daemon shutdown complete"
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419330050Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 28 12:27:45 multinode-071500 dockerd[656]: time="2024-10-28T12:27:45.419355950Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 28 12:27:46 multinode-071500 systemd[1]: docker.service: Deactivated successfully.
	Oct 28 12:27:46 multinode-071500 systemd[1]: Stopped Docker Application Container Engine.
	Oct 28 12:27:46 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	Oct 28 12:27:46 multinode-071500 dockerd[1064]: time="2024-10-28T12:27:46.475097317Z" level=info msg="Starting up"
	Oct 28 12:28:46 multinode-071500 dockerd[1064]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 28 12:28:46 multinode-071500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 28 12:28:46 multinode-071500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 28 12:28:46 multinode-071500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1028 12:28:46.542148   10928 out.go:270] * 
	* 
	W1028 12:28:46.544178   10928 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:28:46.548742   10928 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.6245309s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:28:59.478669   11092 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (230.90s)
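The journalctl excerpt above shows the root cause of this failure: after the restart that minikube requested, the second dockerd instance (pid 1064) never finished starting because it timed out dialing /run/containerd/containerd.sock ("context deadline exceeded"), so systemd marked docker.service as failed and minikube exited with RUNTIME_ENABLE. Outside of CI, the diagnostics suggested in the error text could be re-run from the host roughly as follows; this is a hypothetical session (not part of the captured output) and assumes the multinode-071500 VM is still running and reachable over SSH:

    # Inspect the docker unit and its journal inside the guest
    # (the same commands the error text suggests)
    minikube ssh -p multinode-071500 -- sudo systemctl status docker.service
    minikube ssh -p multinode-071500 -- sudo journalctl -xeu docker.service --no-pager
    # Check whether the containerd socket that dockerd failed to dial ever appeared
    minikube ssh -p multinode-071500 -- ls -l /run/containerd/containerd.sock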

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (118.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (397.243ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-071500" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- rollout status deployment/busybox: exit status 1 (390.2471ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (379.9134ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:00.700214    9608 retry.go:31] will retry after 557.225198ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (376.0031ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:01.634363    9608 retry.go:31] will retry after 2.165988407s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (378.4814ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:04.179739    9608 retry.go:31] will retry after 1.474783791s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (375.5346ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:06.031593    9608 retry.go:31] will retry after 4.229371322s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (379.8732ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:10.641889    9608 retry.go:31] will retry after 2.681238719s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (370.6026ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:13.696017    9608 retry.go:31] will retry after 5.319909869s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (408.8308ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:19.426057    9608 retry.go:31] will retry after 9.030043944s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (370.6496ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:28.828318    9608 retry.go:31] will retry after 17.467050023s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1028 12:29:45.568937    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (390.3604ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:29:46.686546    9608 retry.go:31] will retry after 19.56426697s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (396.75ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 12:30:06.649047    9608 retry.go:31] will retry after 37.497197199s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (373.675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (377.3406ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- exec  -- nslookup kubernetes.io: exit status 1 (399.7316ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- exec  -- nslookup kubernetes.default: exit status 1 (378.6689ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (379.8028ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.3499064s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:30:58.343364    4612 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (118.88s)
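Every kubectl invocation in this test fails with `no server found for cluster "multinode-071500"` because the failed start never wrote the cluster entry into the kubeconfig (the status check above reports that the endpoint "does not appear in ...\kubeconfig"). Outside of CI, the checks below would confirm the missing context and apply the fix that `minikube status` itself suggests; this is only a sketch, assuming the profile still exists on the machine:

    # List the contexts and clusters kubectl actually knows about
    kubectl config get-contexts
    kubectl config view -o jsonpath='{.clusters[*].name}'
    # Rewrite the kubeconfig entry for the profile, as hinted by the status output
    minikube update-context -p multinode-071500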

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (12.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-071500 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (370.8697ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
E1028 12:31:08.658237    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.2090602s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:31:10.916144    5440 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (12.58s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (19.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-071500 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-071500 -v 3 --alsologtostderr: exit status 103 (7.5238984s)

                                                
                                                
-- stdout --
	* The control-plane node multinode-071500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-071500"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:31:11.124730   10236 out.go:345] Setting OutFile to fd 1668 ...
	I1028 12:31:11.207039   10236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:31:11.207039   10236 out.go:358] Setting ErrFile to fd 1608...
	I1028 12:31:11.207039   10236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:31:11.224040   10236 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:31:11.225039   10236 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:31:11.226047   10236 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:31:13.408829   10236 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:31:13.409432   10236 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:31:13.409432   10236 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:31:13.410297   10236 api_server.go:166] Checking apiserver status ...
	I1028 12:31:13.421709   10236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:31:13.421709   10236 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:31:15.600760   10236 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:31:15.600760   10236 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:31:15.600948   10236 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:31:18.195974   10236 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:31:18.196236   10236 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:31:18.196236   10236 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:31:18.306548   10236 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.8845885s)
	W1028 12:31:18.306633   10236 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:31:18.310514   10236 out.go:177] * The control-plane node multinode-071500 apiserver is not running: (state=Stopped)
	I1028 12:31:18.313213   10236 out.go:177]   To start a cluster, run: "minikube start -p multinode-071500"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-071500 -v 3 --alsologtostderr" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.3273187s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:31:30.772548    8572 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/AddNode (19.85s)
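The node add aborts with exit status 103 because minikube's preflight check found no kube-apiserver process on the control plane (the pgrep run over SSH above returned nothing). A rough way to reproduce that check by hand and then bring the control plane back, assuming `minikube ssh` is given the command the same way the functional tests use it, is:

	# Look for a running apiserver inside the VM; empty output means it is down
	out/minikube-windows-amd64.exe -p multinode-071500 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# Restart the control plane, as the error message itself suggests
	out/minikube-windows-amd64.exe start -p multinode-071500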

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (12.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-071500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-071500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (132.0126ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-071500

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-071500 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-071500 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
E1028 12:31:39.723324    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.3042701s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:31:43.215051   10132 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (12.45s)

                                                
                                    
TestMultiNode/serial/ProfileList (24.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.2744408s)
multinode_test.go:166: expected profile "multinode-071500" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-201400\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-201400\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-201400\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.255.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.248.250\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.250.174\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.254.230\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"172.27.250.248\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"
kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube6:/minikube-host\",\"Mount9PVersion\":\"9p2000.L
\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false},{\"Name\":\"multinode-071500\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-071500\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"Co
ntainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-071500\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\
"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.249.25\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube6:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"Moun
tMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.1004611s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:32:07.601387    5860 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ProfileList (24.38s)
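The assertion expects three nodes in the profile JSON, but the "multinode-071500" entry quoted above records only a single control-plane node. A short PowerShell check against the same `profile list --output json` output, assuming the JSON shape shown above, is:

	# Count the nodes recorded for the multinode profile
	$profiles = out/minikube-windows-amd64.exe profile list --output json | ConvertFrom-Json
	($profiles.valid | Where-Object Name -eq 'multinode-071500').Config.Nodes.Count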

                                                
                                    
TestMultiNode/serial/CopyFile (24.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status --output json --alsologtostderr: exit status 6 (12.1820985s)

                                                
                                                
-- stdout --
	{"Name":"multinode-071500","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:32:07.792591   10028 out.go:345] Setting OutFile to fd 1704 ...
	I1028 12:32:07.869469   10028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:07.870471   10028 out.go:358] Setting ErrFile to fd 1976...
	I1028 12:32:07.870471   10028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:07.885461   10028 out.go:352] Setting JSON to true
	I1028 12:32:07.885461   10028 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:32:07.885461   10028 notify.go:220] Checking for updates...
	I1028 12:32:07.886467   10028 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:32:07.886467   10028 status.go:174] checking status of multinode-071500 ...
	I1028 12:32:07.887474   10028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:32:10.068510   10028 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:32:10.068510   10028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:32:10.068510   10028 status.go:371] multinode-071500 host status = "Running" (err=<nil>)
	I1028 12:32:10.068616   10028 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:32:10.069495   10028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:32:12.246492   10028 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:32:12.246492   10028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:32:12.247316   10028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:32:14.831266   10028 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:32:14.831266   10028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:32:14.832263   10028 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:32:14.844311   10028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:32:14.844311   10028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:32:17.079018   10028 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:32:17.079018   10028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:32:17.079018   10028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:32:19.618354   10028 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:32:19.618488   10028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:32:19.618747   10028 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:32:19.713630   10028 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8692123s)
	I1028 12:32:19.732315   10028 ssh_runner.go:195] Run: systemctl --version
	I1028 12:32:19.755019   10028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1028 12:32:19.781502   10028 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:32:19.781502   10028 api_server.go:166] Checking apiserver status ...
	I1028 12:32:19.795546   10028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1028 12:32:19.819365   10028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:32:19.819365   10028 status.go:463] multinode-071500 apiserver status = Stopped (err=<nil>)
	I1028 12:32:19.819365   10028 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-071500 status --output json --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.1525301s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:32:31.941338    4280 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/CopyFile (24.34s)
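Here `minikube status` itself is the failing step: the host is Running but kubelet and the apiserver are Stopped and the kubeconfig is Misconfigured, which maps to exit status 6. The JSON status line is still printed on stdout despite the non-zero exit, so it can be captured and inspected, for example in PowerShell:

	# The command exits 6, but the status JSON is still emitted on stdout
	$status = out/minikube-windows-amd64.exe -p multinode-071500 status --output json | ConvertFrom-Json
	$status.Host, $status.Kubelet, $status.APIServer, $status.Kubeconfig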

                                                
                                    
TestMultiNode/serial/StopNode (24.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 node stop m03: exit status 85 (313.77ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_b919dc3b020968087ec77f25afbb061db3e8211c_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-071500 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status: exit status 6 (12.213528s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:32:44.469708    7572 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:257: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-071500 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.2614354s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:32:56.725476   11620 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopNode (24.79s)
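`node stop m03` exits with GUEST_NODE_RETRIEVE because the profile only knows about its single control-plane node, so there is no m03 to stop. A quick way to see which nodes the profile actually tracks before stopping one, using the profile name from the logs above, is:

	# List the nodes minikube records for this profile
	out/minikube-windows-amd64.exe node list -p multinode-071500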

                                                
                                    
TestMultiNode/serial/StartAfterStop (80.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 node start m03 -v=7 --alsologtostderr: exit status 85 (311.7507ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:32:56.929169     196 out.go:345] Setting OutFile to fd 1564 ...
	I1028 12:32:57.007237     196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:57.007237     196 out.go:358] Setting ErrFile to fd 1304...
	I1028 12:32:57.007237     196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:57.028951     196 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:32:57.028951     196 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:32:57.047129     196 out.go:201] 
	W1028 12:32:57.050285     196 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1028 12:32:57.050437     196 out.go:270] * 
	* 
	W1028 12:32:57.074684     196 out.go:293] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:32:57.077238     196 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1028 12:32:56.929169     196 out.go:345] Setting OutFile to fd 1564 ...
I1028 12:32:57.007237     196 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 12:32:57.007237     196 out.go:358] Setting ErrFile to fd 1304...
I1028 12:32:57.007237     196 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 12:32:57.028951     196 mustload.go:65] Loading cluster: multinode-071500
I1028 12:32:57.028951     196 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 12:32:57.047129     196 out.go:201] 
W1028 12:32:57.050285     196 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1028 12:32:57.050437     196 out.go:270] * 
* 
W1028 12:32:57.074684     196 out.go:293] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_2.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_2.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 12:32:57.077238     196 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-071500 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr: exit status 6 (12.5724576s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:32:57.237152   11628 out.go:345] Setting OutFile to fd 1784 ...
	I1028 12:32:57.312977   11628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:57.312977   11628 out.go:358] Setting ErrFile to fd 1584...
	I1028 12:32:57.312977   11628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:57.335600   11628 out.go:352] Setting JSON to false
	I1028 12:32:57.335600   11628 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:32:57.335600   11628 notify.go:220] Checking for updates...
	I1028 12:32:57.335600   11628 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:32:57.335600   11628 status.go:174] checking status of multinode-071500 ...
	I1028 12:32:57.337374   11628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:32:59.532894   11628 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:32:59.532938   11628 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:32:59.533028   11628 status.go:371] multinode-071500 host status = "Running" (err=<nil>)
	I1028 12:32:59.533028   11628 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:32:59.534126   11628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:01.707908   11628 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:01.708745   11628 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:01.708745   11628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:04.453336   11628 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:04.453336   11628 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:04.453336   11628 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:04.465876   11628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:33:04.465876   11628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:06.676115   11628 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:06.677013   11628 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:06.677013   11628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:09.461517   11628 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:09.461517   11628 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:09.461890   11628 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:33:09.564026   11628 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0980927s)
	I1028 12:33:09.575471   11628 ssh_runner.go:195] Run: systemctl --version
	I1028 12:33:09.594372   11628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1028 12:33:09.620573   11628 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:33:09.620659   11628 api_server.go:166] Checking apiserver status ...
	I1028 12:33:09.631042   11628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1028 12:33:09.654075   11628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:33:09.654178   11628 status.go:463] multinode-071500 apiserver status = Stopped (err=<nil>)
	I1028 12:33:09.654178   11628 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1028 12:33:09.684804    9608 retry.go:31] will retry after 608.63189ms: exit status 6
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr: exit status 6 (12.2498281s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:33:10.415240   10380 out.go:345] Setting OutFile to fd 1424 ...
	I1028 12:33:10.495481   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:10.495481   10380 out.go:358] Setting ErrFile to fd 1920...
	I1028 12:33:10.495481   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:10.510410   10380 out.go:352] Setting JSON to false
	I1028 12:33:10.510410   10380 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:33:10.510410   10380 notify.go:220] Checking for updates...
	I1028 12:33:10.512152   10380 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:33:10.512297   10380 status.go:174] checking status of multinode-071500 ...
	I1028 12:33:10.513482   10380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:12.698305   10380 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:12.698305   10380 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:12.698386   10380 status.go:371] multinode-071500 host status = "Running" (err=<nil>)
	I1028 12:33:12.698386   10380 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:12.699361   10380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:14.899450   10380 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:14.900019   10380 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:14.900019   10380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:17.509554   10380 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:17.510086   10380 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:17.510086   10380 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:17.522652   10380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:33:17.522652   10380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:19.680665   10380 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:19.680665   10380 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:19.680665   10380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:22.309749   10380 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:22.310520   10380 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:22.310520   10380 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:33:22.413854   10380 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8910735s)
	I1028 12:33:22.425176   10380 ssh_runner.go:195] Run: systemctl --version
	I1028 12:33:22.447436   10380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1028 12:33:22.473178   10380 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:33:22.473265   10380 api_server.go:166] Checking apiserver status ...
	I1028 12:33:22.485658   10380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1028 12:33:22.509711   10380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:33:22.509711   10380 status.go:463] multinode-071500 apiserver status = Stopped (err=<nil>)
	I1028 12:33:22.509711   10380 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1028 12:33:22.544226    9608 retry.go:31] will retry after 1.261171732s: exit status 6
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr: exit status 6 (12.6256874s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:33:23.925663    9788 out.go:345] Setting OutFile to fd 720 ...
	I1028 12:33:24.000664    9788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:24.000664    9788 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:24.000664    9788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:24.018863    9788 out.go:352] Setting JSON to false
	I1028 12:33:24.019015    9788 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:33:24.019122    9788 notify.go:220] Checking for updates...
	I1028 12:33:24.019980    9788 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:33:24.020054    9788 status.go:174] checking status of multinode-071500 ...
	I1028 12:33:24.021160    9788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:26.268790    9788 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:26.268790    9788 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:26.268790    9788 status.go:371] multinode-071500 host status = "Running" (err=<nil>)
	I1028 12:33:26.268790    9788 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:26.269619    9788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:28.566655    9788 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:28.566655    9788 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:28.566655    9788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:31.294918    9788 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:31.294918    9788 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:31.295630    9788 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:31.307666    9788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:33:31.307666    9788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:33.558244    9788 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:33.558244    9788 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:33.558244    9788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:36.210280    9788 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:36.210530    9788 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:36.210530    9788 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:33:36.303274    9788 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9955512s)
	I1028 12:33:36.315012    9788 ssh_runner.go:195] Run: systemctl --version
	I1028 12:33:36.336015    9788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1028 12:33:36.364935    9788 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:33:36.364990    9788 api_server.go:166] Checking apiserver status ...
	I1028 12:33:36.376649    9788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1028 12:33:36.402039    9788 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:33:36.402112    9788 status.go:463] multinode-071500 apiserver status = Stopped (err=<nil>)
	I1028 12:33:36.402112    9788 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1028 12:33:36.432306    9608 retry.go:31] will retry after 2.348328106s: exit status 6
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr: exit status 6 (12.3208009s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:33:38.900822    4028 out.go:345] Setting OutFile to fd 1328 ...
	I1028 12:33:38.981717    4028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:38.981717    4028 out.go:358] Setting ErrFile to fd 1296...
	I1028 12:33:38.981717    4028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:38.999379    4028 out.go:352] Setting JSON to false
	I1028 12:33:38.999379    4028 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:33:38.999379    4028 notify.go:220] Checking for updates...
	I1028 12:33:39.000381    4028 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:33:39.000381    4028 status.go:174] checking status of multinode-071500 ...
	I1028 12:33:39.001292    4028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:41.182745    4028 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:41.182897    4028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:41.183132    4028 status.go:371] multinode-071500 host status = "Running" (err=<nil>)
	I1028 12:33:41.183207    4028 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:41.184180    4028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:43.438456    4028 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:43.438547    4028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:43.438646    4028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:46.072956    4028 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:46.072956    4028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:46.073718    4028 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:46.085550    4028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:33:46.085550    4028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:48.262525    4028 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:48.262525    4028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:48.263037    4028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:33:50.869715    4028 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:33:50.870217    4028 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:50.870217    4028 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:33:50.975268    4028 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8896634s)
	I1028 12:33:50.986276    4028 ssh_runner.go:195] Run: systemctl --version
	I1028 12:33:51.008134    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1028 12:33:51.034197    4028 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:33:51.034281    4028 api_server.go:166] Checking apiserver status ...
	I1028 12:33:51.046954    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1028 12:33:51.070470    4028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:33:51.070590    4028 status.go:463] multinode-071500 apiserver status = Stopped (err=<nil>)
	I1028 12:33:51.070590    4028 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1028 12:33:51.103339    9608 retry.go:31] will retry after 1.940570189s: exit status 6
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr: exit status 6 (12.154215s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:33:53.167616    7940 out.go:345] Setting OutFile to fd 1524 ...
	I1028 12:33:53.250655    7940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:53.250655    7940 out.go:358] Setting ErrFile to fd 1460...
	I1028 12:33:53.250655    7940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:53.268113    7940 out.go:352] Setting JSON to false
	I1028 12:33:53.268113    7940 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:33:53.268113    7940 notify.go:220] Checking for updates...
	I1028 12:33:53.269040    7940 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:33:53.269040    7940 status.go:174] checking status of multinode-071500 ...
	I1028 12:33:53.270105    7940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:55.430742    7940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:55.431815    7940 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:55.431815    7940 status.go:371] multinode-071500 host status = "Running" (err=<nil>)
	I1028 12:33:55.431815    7940 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:33:55.432847    7940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:33:57.621056    7940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:33:57.621141    7940 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:33:57.621219    7940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:34:00.215028    7940 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:34:00.215028    7940 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:34:00.215028    7940 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:34:00.227111    7940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:34:00.227111    7940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:34:02.412115    7940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:34:02.412115    7940 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:34:02.412115    7940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:34:04.970955    7940 main.go:141] libmachine: [stdout =====>] : 172.27.249.25
	
	I1028 12:34:04.972052    7940 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:34:04.972681    7940 sshutil.go:53] new ssh client: &{IP:172.27.249.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:34:05.075647    7940 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8484816s)
	I1028 12:34:05.087714    7940 ssh_runner.go:195] Run: systemctl --version
	I1028 12:34:05.108474    7940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1028 12:34:05.134121    7940 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:34:05.134154    7940 api_server.go:166] Checking apiserver status ...
	I1028 12:34:05.146125    7940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1028 12:34:05.169575    7940 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:34:05.169575    7940 status.go:463] multinode-071500 apiserver status = Stopped (err=<nil>)
	I1028 12:34:05.169575    7940 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-071500 status -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.1125039s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:34:17.243021    3864 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StartAfterStop (80.51s)
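Note: the repeated exit status 6 above traces back to one condition visible in the stderr blocks: after the node restart, the "multinode-071500" entry is missing from C:\Users\jenkins.minikube6\minikube-integration\kubeconfig, so `minikube status` reports kubeconfig: Misconfigured. A minimal manual check/repair sketch, using the standard minikube/kubectl CLI with the profile name from this run (these commands are editorial, not part of the recorded output):

	# list the contexts kubectl currently knows about; the profile's context is expected to be stale or absent here
	kubectl config get-contexts
	# rewrite the kubeconfig entry for this profile with the VM's current IP
	out/minikube-windows-amd64.exe update-context -p multinode-071500
	# re-check; host/kubelet/apiserver/kubeconfig should now agree if the cluster itself is healthy
	out/minikube-windows-amd64.exe status -p multinode-071500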

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (262.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-071500
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-071500
E1028 12:34:45.573094    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-071500: (38.9763745s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true -v=8 --alsologtostderr
E1028 12:36:39.727478    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true -v=8 --alsologtostderr: (3m6.7455242s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-071500
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-071500	172.27.249.25
After restart: multinode-071500	172.27.244.98
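Note: the node set is unchanged; what differs is the control-plane node's IP (172.27.249.25 before, 172.27.244.98 after), which is consistent with the VM taking a new DHCP lease from the Hyper-V switch across the stop/start. A rough sketch of the comparison multinode_test.go:338 is making, with the profile name from this run (the inline annotations are editorial, not recorded output):

	out/minikube-windows-amd64.exe node list -p multinode-071500    # before the restart this printed: multinode-071500	172.27.249.25
	out/minikube-windows-amd64.exe stop -p multinode-071500
	out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true
	out/minikube-windows-amd64.exe node list -p multinode-071500    # after the restart it printed: multinode-071500	172.27.244.98

The test fails because the two listings are compared and the IP column no longer matches.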
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: (12.7676082s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-071500 logs -n 25: (9.0447405s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-071500 -- apply -f                   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:28 UTC |                     |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- rollout                    | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- exec                       | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | -- nslookup kubernetes.io                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- exec                       | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | -- nslookup kubernetes.default                    |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500                               | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | -- exec  -- nslookup                              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o                | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| node    | add -p multinode-071500 -v 3                      | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:31 UTC |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | multinode-071500 node stop m03                    | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC |                     |
	| node    | multinode-071500 node start                       | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-071500                          | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:34 UTC |                     |
	| stop    | -p multinode-071500                               | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:34 UTC | 28 Oct 24 12:34 UTC |
	| start   | -p multinode-071500                               | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:34 UTC | 28 Oct 24 12:38 UTC |
	|         | --wait=true -v=8                                  |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | list -p multinode-071500                          | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:34:56
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:34:56.666427    5536 out.go:345] Setting OutFile to fd 1984 ...
	I1028 12:34:56.751121    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:34:56.751121    5536 out.go:358] Setting ErrFile to fd 1492...
	I1028 12:34:56.751121    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:34:56.775384    5536 out.go:352] Setting JSON to false
	I1028 12:34:56.779507    5536 start.go:129] hostinfo: {"hostname":"minikube6","uptime":166721,"bootTime":1729952174,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 12:34:56.779507    5536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 12:34:56.784433    5536 out.go:177] * [multinode-071500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 12:34:56.787119    5536 notify.go:220] Checking for updates...
	I1028 12:34:56.789366    5536 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:34:56.791420    5536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:34:56.794380    5536 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 12:34:56.797545    5536 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:34:56.800105    5536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:34:56.803865    5536 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:34:56.804885    5536 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:35:02.400781    5536 out.go:177] * Using the hyperv driver based on existing profile
	I1028 12:35:02.404951    5536 start.go:297] selected driver: hyperv
	I1028 12:35:02.404951    5536 start.go:901] validating driver "hyperv" against &{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.249.25 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:35:02.405123    5536 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:35:02.454829    5536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:35:02.454829    5536 cni.go:84] Creating CNI manager for ""
	I1028 12:35:02.454829    5536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:35:02.454829    5536 start.go:340] cluster config:
	{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.249.25 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:35:02.455735    5536 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:35:02.463217    5536 out.go:177] * Starting "multinode-071500" primary control-plane node in "multinode-071500" cluster
	I1028 12:35:02.465576    5536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:35:02.466508    5536 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 12:35:02.466508    5536 cache.go:56] Caching tarball of preloaded images
	I1028 12:35:02.466508    5536 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 12:35:02.466508    5536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 12:35:02.466508    5536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:35:02.470011    5536 start.go:360] acquireMachinesLock for multinode-071500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:35:02.470011    5536 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-071500"
	I1028 12:35:02.470011    5536 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:35:02.470011    5536 fix.go:54] fixHost starting: 
	I1028 12:35:02.471012    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:05.135149    5536 main.go:141] libmachine: [stdout =====>] : Off
	
	I1028 12:35:05.135149    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:05.135149    5536 fix.go:112] recreateIfNeeded on multinode-071500: state=Stopped err=<nil>
	W1028 12:35:05.135149    5536 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:35:05.138703    5536 out.go:177] * Restarting existing hyperv VM for "multinode-071500" ...
	I1028 12:35:05.142947    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-071500
	I1028 12:35:08.237568    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:08.237568    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:08.237568    5536 main.go:141] libmachine: Waiting for host to start...
	I1028 12:35:08.237568    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:10.525404    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:10.525404    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:10.525404    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:13.053429    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:13.053845    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:14.054891    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:16.320644    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:16.320772    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:16.320772    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:18.910405    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:18.910405    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:19.910671    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:22.175501    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:22.175501    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:22.175501    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:24.757037    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:24.757037    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:25.757313    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:28.049865    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:28.049865    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:28.049865    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:30.652316    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:30.652316    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:31.652582    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:33.901681    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:33.901681    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:33.901681    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:36.580198    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:36.580198    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:36.584163    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:38.786994    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:38.788222    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:38.788222    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:41.393216    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:41.393216    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:41.393579    5536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:35:41.397564    5536 machine.go:93] provisionDockerMachine start ...
	I1028 12:35:41.397686    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:43.548993    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:43.548993    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:43.548993    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:46.112252    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:46.113009    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:46.119302    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:46.119550    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:35:46.120145    5536 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:35:46.250618    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:35:46.250618    5536 buildroot.go:166] provisioning hostname "multinode-071500"
	I1028 12:35:46.250618    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:48.438852    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:48.438930    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:48.438930    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:51.045692    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:51.045692    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:51.054209    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:51.054949    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:35:51.054949    5536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-071500 && echo "multinode-071500" | sudo tee /etc/hostname
	I1028 12:35:51.208291    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-071500
	
	I1028 12:35:51.208291    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:53.435254    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:53.436145    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:53.436339    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:56.122595    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:56.122595    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:56.128806    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:56.129413    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:35:56.129413    5536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-071500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-071500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-071500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:35:56.268107    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:35:56.268257    5536 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 12:35:56.268335    5536 buildroot.go:174] setting up certificates
	I1028 12:35:56.268335    5536 provision.go:84] configureAuth start
	I1028 12:35:56.268456    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:58.486808    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:58.486808    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:58.486962    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:01.098358    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:01.098358    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:01.098358    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:03.277569    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:03.277569    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:03.277569    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:05.931981    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:05.932036    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:05.932036    5536 provision.go:143] copyHostCerts
	I1028 12:36:05.932036    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 12:36:05.932570    5536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 12:36:05.932570    5536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 12:36:05.932830    5536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 12:36:05.934422    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 12:36:05.934422    5536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 12:36:05.934422    5536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 12:36:05.935237    5536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 12:36:05.936018    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 12:36:05.936554    5536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 12:36:05.936658    5536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 12:36:05.936871    5536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 12:36:05.938073    5536 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-071500 san=[127.0.0.1 172.27.244.98 localhost minikube multinode-071500]
	I1028 12:36:06.130120    5536 provision.go:177] copyRemoteCerts
	I1028 12:36:06.141421    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:36:06.141421    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:08.361691    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:08.361878    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:08.361878    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:10.952224    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:10.952642    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:10.953177    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:11.056927    5536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.915451s)
	I1028 12:36:11.056927    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 12:36:11.057231    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:36:11.110456    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 12:36:11.110720    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 12:36:11.157782    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 12:36:11.158395    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:36:11.204886    5536 provision.go:87] duration metric: took 14.9363823s to configureAuth
	I1028 12:36:11.204886    5536 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:36:11.205924    5536 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:36:11.205924    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:13.339276    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:13.339790    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:13.339790    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:15.917727    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:15.917727    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:15.924302    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:15.924302    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:15.924302    5536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 12:36:16.055455    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 12:36:16.055597    5536 buildroot.go:70] root file system type: tmpfs
	I1028 12:36:16.055844    5536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 12:36:16.055928    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:18.243905    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:18.243905    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:18.244267    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:20.827810    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:20.827873    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:20.833215    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:20.833353    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:20.833876    5536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 12:36:20.999027    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 12:36:20.999027    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:23.132072    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:23.132072    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:23.132280    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:25.703137    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:25.703390    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:25.708678    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:25.709204    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:25.709204    5536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 12:36:27.930469    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 12:36:27.930469    5536 machine.go:96] duration metric: took 46.5323796s to provisionDockerMachine
	I1028 12:36:27.930469    5536 start.go:293] postStartSetup for "multinode-071500" (driver="hyperv")
	I1028 12:36:27.931048    5536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:36:27.943014    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:36:27.943592    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:30.130085    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:30.130849    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:30.130988    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:32.778603    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:32.778603    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:32.779633    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:32.889212    5536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9455649s)
	I1028 12:36:32.900128    5536 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:36:32.908145    5536 command_runner.go:130] > NAME=Buildroot
	I1028 12:36:32.908145    5536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 12:36:32.908145    5536 command_runner.go:130] > ID=buildroot
	I1028 12:36:32.908145    5536 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 12:36:32.908145    5536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 12:36:32.908145    5536 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:36:32.908145    5536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 12:36:32.908880    5536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 12:36:32.909635    5536 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 12:36:32.909635    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 12:36:32.922670    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:36:32.940830    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 12:36:32.988594    5536 start.go:296] duration metric: took 5.058067s for postStartSetup
	I1028 12:36:32.988876    5536 fix.go:56] duration metric: took 1m30.5176985s for fixHost
	I1028 12:36:32.988928    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:35.200634    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:35.201363    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:35.201437    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:37.822660    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:37.822660    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:37.828530    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:37.829340    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:37.829340    5536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:36:37.958514    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730118997.972644227
	
	I1028 12:36:37.958514    5536 fix.go:216] guest clock: 1730118997.972644227
	I1028 12:36:37.958514    5536 fix.go:229] Guest: 2024-10-28 12:36:37.972644227 +0000 UTC Remote: 2024-10-28 12:36:32.9888762 +0000 UTC m=+96.419455301 (delta=4.983768027s)
	I1028 12:36:37.959137    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:40.183180    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:40.183180    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:40.183180    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:42.794436    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:42.794560    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:42.800818    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:42.800818    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:42.801399    5536 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730118997
	I1028 12:36:42.943713    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 12:36:37 UTC 2024
	
	I1028 12:36:42.943713    5536 fix.go:236] clock set: Mon Oct 28 12:36:37 UTC 2024
	 (err=<nil>)
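	
	The clock fix above works in two probes: `date +%s.%N` reads the guest's wall clock over SSH, fix.go compares it against the host-side timestamp recorded at the end of postStartSetup (delta ≈ 4.98s in this run), and `sudo date -s @<epoch>` writes a corrected epoch back into the guest. The drift threshold and the exact epoch minikube chooses are not spelled out in this log, so both are treated as assumptions in the rough sketch below (guest IP and user taken from the sshutil lines above):
	
	# Rough guest-clock sync sketch; the 1-second drift threshold is an
	# assumption for illustration, not minikube's documented behaviour.
	GUEST_IP=172.27.244.98
	GUEST_EPOCH=$(ssh docker@"$GUEST_IP" 'date +%s')
	HOST_EPOCH=$(date +%s)
	DRIFT=$(( HOST_EPOCH - GUEST_EPOCH ))
	if [ "${DRIFT#-}" -gt 1 ]; then
	    ssh docker@"$GUEST_IP" "sudo date -s @${HOST_EPOCH}"
	fi
	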
	I1028 12:36:42.943713    5536 start.go:83] releasing machines lock for "multinode-071500", held for 1m40.472567s
	I1028 12:36:42.943713    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:45.173941    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:45.173941    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:45.174463    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:47.783560    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:47.784148    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:47.788033    5536 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 12:36:47.788576    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:47.802257    5536 ssh_runner.go:195] Run: cat /version.json
	I1028 12:36:47.802257    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:50.088629    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:50.089218    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:50.089218    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:50.089218    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:50.089774    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:50.089774    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:52.804920    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:52.805632    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:52.805632    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:52.832441    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:52.832441    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:52.833221    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:52.903763    5536 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 12:36:52.904316    5536 ssh_runner.go:235] Completed: cat /version.json: (5.1020018s)
	I1028 12:36:52.916521    5536 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1028 12:36:52.916957    5536 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1288657s)
	W1028 12:36:52.916957    5536 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 12:36:52.917231    5536 ssh_runner.go:195] Run: systemctl --version
	I1028 12:36:52.926470    5536 command_runner.go:130] > systemd 252 (252)
	I1028 12:36:52.926470    5536 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 12:36:52.938643    5536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:36:52.947642    5536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 12:36:52.948416    5536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:36:52.959310    5536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:36:52.989431    5536 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1028 12:36:52.989710    5536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
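	
	The find/mv step above parks conflicting CNI configs rather than deleting them: anything matching *bridge* or *podman* under /etc/cni/net.d is renamed with a .mk_disabled suffix (here, 87-podman-bridge.conflist), so only the bridge CNI that minikube writes later gets loaded. The same command, spelled out for an interactive shell (parentheses escaped, and the mv argument passed positionally instead of interpolated into sh -c):
	
	# Park pre-existing bridge/podman CNI configs with a .mk_disabled suffix.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
	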
	I1028 12:36:52.989933    5536 start.go:495] detecting cgroup driver to use...
	I1028 12:36:52.990513    5536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:36:53.026237    5536 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W1028 12:36:53.030289    5536 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 12:36:53.030471    5536 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 12:36:53.038302    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 12:36:53.075502    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 12:36:53.098022    5536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 12:36:53.109780    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 12:36:53.141660    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:36:53.173563    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 12:36:53.205980    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:36:53.240068    5536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:36:53.274173    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 12:36:53.306396    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 12:36:53.341068    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
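	
	The sed series above adapts /etc/containerd/config.toml to the driver decision that follows ("cgroupfs"): the sandbox image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false, legacy runtime names are rewritten to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and unprivileged ports are enabled. A condensed sketch of the core edits (same file, same keys as in the log):
	
	# Condensed form of the config.toml edits above.
	CFG=/etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"      # cgroupfs driver
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
	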
	I1028 12:36:53.373539    5536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:36:53.397892    5536 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:36:53.398542    5536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:36:53.410247    5536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:36:53.444742    5536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
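	
	The sysctl probe above fails with status 255 only because the br_netfilter module is not loaded yet, which the code treats as non-fatal; loading the module and turning on IPv4 forwarding (the next two commands) provides what the bridge CNI and kube-proxy need. The same three steps in order:
	
	# Bridge-netfilter prep: load the module, enable forwarding, re-check.
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once br_netfilter is loaded
	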
	I1028 12:36:53.473148    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:53.670923    5536 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 12:36:53.703561    5536 start.go:495] detecting cgroup driver to use...
	I1028 12:36:53.716908    5536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 12:36:53.747325    5536 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1028 12:36:53.747325    5536 command_runner.go:130] > [Unit]
	I1028 12:36:53.747504    5536 command_runner.go:130] > Description=Docker Application Container Engine
	I1028 12:36:53.747548    5536 command_runner.go:130] > Documentation=https://docs.docker.com
	I1028 12:36:53.747583    5536 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1028 12:36:53.747583    5536 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1028 12:36:53.747615    5536 command_runner.go:130] > StartLimitBurst=3
	I1028 12:36:53.747615    5536 command_runner.go:130] > StartLimitIntervalSec=60
	I1028 12:36:53.747615    5536 command_runner.go:130] > [Service]
	I1028 12:36:53.747615    5536 command_runner.go:130] > Type=notify
	I1028 12:36:53.747665    5536 command_runner.go:130] > Restart=on-failure
	I1028 12:36:53.747665    5536 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1028 12:36:53.747665    5536 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1028 12:36:53.747665    5536 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1028 12:36:53.747665    5536 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1028 12:36:53.747665    5536 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1028 12:36:53.747665    5536 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1028 12:36:53.747665    5536 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1028 12:36:53.747665    5536 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1028 12:36:53.747665    5536 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1028 12:36:53.747665    5536 command_runner.go:130] > ExecStart=
	I1028 12:36:53.747665    5536 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1028 12:36:53.747665    5536 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1028 12:36:53.747665    5536 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1028 12:36:53.747665    5536 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1028 12:36:53.747665    5536 command_runner.go:130] > LimitNOFILE=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > LimitNPROC=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > LimitCORE=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1028 12:36:53.747665    5536 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1028 12:36:53.747665    5536 command_runner.go:130] > TasksMax=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > TimeoutStartSec=0
	I1028 12:36:53.747665    5536 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1028 12:36:53.747665    5536 command_runner.go:130] > Delegate=yes
	I1028 12:36:53.747665    5536 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1028 12:36:53.747665    5536 command_runner.go:130] > KillMode=process
	I1028 12:36:53.747665    5536 command_runner.go:130] > [Install]
	I1028 12:36:53.747665    5536 command_runner.go:130] > WantedBy=multi-user.target
	I1028 12:36:53.760837    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:36:53.800047    5536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:36:53.844682    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:36:53.880744    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:36:53.915134    5536 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 12:36:53.994892    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:36:54.019684    5536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:36:54.056764    5536 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1028 12:36:54.067116    5536 ssh_runner.go:195] Run: which cri-dockerd
	I1028 12:36:54.073667    5536 command_runner.go:130] > /usr/bin/cri-dockerd
	I1028 12:36:54.084505    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 12:36:54.104625    5536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 12:36:54.149881    5536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 12:36:54.365991    5536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 12:36:54.567365    5536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 12:36:54.567651    5536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
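	
	docker.go:574 above reports configuring Docker itself for the "cgroupfs" driver by writing a 130-byte /etc/docker/daemon.json. The file's exact contents are not printed in this log, so the sketch below is only a hypothetical minimal daemon.json that would express the same choice, not minikube's real payload:
	
	# Hypothetical daemon.json matching the cgroupfs decision above; the real
	# file minikube writes is not shown in this log.
	sudo tee /etc/docker/daemon.json >/dev/null <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
	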
	I1028 12:36:54.611059    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:54.841683    5536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:36:57.431947    5536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5887535s)
	I1028 12:36:57.445563    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 12:36:57.485901    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 12:36:57.519746    5536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 12:36:57.725369    5536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 12:36:57.921411    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:58.128297    5536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 12:36:58.168705    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 12:36:58.206264    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:58.416540    5536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 12:36:58.531602    5536 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 12:36:58.543182    5536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 12:36:58.551838    5536 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1028 12:36:58.551838    5536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 12:36:58.551838    5536 command_runner.go:130] > Device: 0,22	Inode: 857         Links: 1
	I1028 12:36:58.551925    5536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1028 12:36:58.551925    5536 command_runner.go:130] > Access: 2024-10-28 12:36:58.456643614 +0000
	I1028 12:36:58.551925    5536 command_runner.go:130] > Modify: 2024-10-28 12:36:58.456643614 +0000
	I1028 12:36:58.551925    5536 command_runner.go:130] > Change: 2024-10-28 12:36:58.461643630 +0000
	I1028 12:36:58.551983    5536 command_runner.go:130] >  Birth: -
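	
	start.go:542 above gives the cri-dockerd socket up to 60 seconds to appear before the stat; in this run it was already present. minikube implements the wait in Go, so the shell loop below is only an illustrative equivalent:
	
	# Generic wait-for-socket loop, up to 60s (illustration only).
	SOCK=/var/run/cri-dockerd.sock
	for _ in $(seq 1 60); do
	    [ -S "$SOCK" ] && break
	    sleep 1
	done
	stat "$SOCK"
	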
	I1028 12:36:58.551983    5536 start.go:563] Will wait 60s for crictl version
	I1028 12:36:58.563774    5536 ssh_runner.go:195] Run: which crictl
	I1028 12:36:58.570588    5536 command_runner.go:130] > /usr/bin/crictl
	I1028 12:36:58.581313    5536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:36:58.636732    5536 command_runner.go:130] > Version:  0.1.0
	I1028 12:36:58.636732    5536 command_runner.go:130] > RuntimeName:  docker
	I1028 12:36:58.636732    5536 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1028 12:36:58.636851    5536 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 12:36:58.636851    5536 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 12:36:58.646966    5536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:36:58.681716    5536 command_runner.go:130] > 27.3.1
	I1028 12:36:58.694333    5536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:36:58.734395    5536 command_runner.go:130] > 27.3.1
	I1028 12:36:58.740974    5536 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 12:36:58.741085    5536 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 12:36:58.747683    5536 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 12:36:58.748635    5536 ip.go:214] interface addr: 172.27.240.1/20
	I1028 12:36:58.759086    5536 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 12:36:58.766328    5536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
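	
	The /etc/hosts rewrite above is deliberately idempotent: it drops any existing host.minikube.internal line, appends a fresh one pointing at the host-side switch address found just before (172.27.240.1), and installs the result with sudo cp so the grep/append part runs unprivileged. Expanded for readability:
	
	# Idempotent /etc/hosts entry for host.minikube.internal (IP from the log).
	HOST_IP=172.27.240.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$HOST_IP"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
	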
	I1028 12:36:58.790382    5536 kubeadm.go:883] updating cluster {Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:36:58.790506    5536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:36:58.800555    5536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:36:58.825491    5536 docker.go:689] Got preloaded images: 
	I1028 12:36:58.825491    5536 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.2 wasn't preloaded
	I1028 12:36:58.837616    5536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 12:36:58.856972    5536 command_runner.go:139] > {"Repositories":{}}
	I1028 12:36:58.867260    5536 ssh_runner.go:195] Run: which lz4
	I1028 12:36:58.873529    5536 command_runner.go:130] > /usr/bin/lz4
	I1028 12:36:58.873529    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 12:36:58.883881    5536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:36:58.891179    5536 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:36:58.891179    5536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:36:58.891386    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (343199686 bytes)
	I1028 12:37:01.194326    5536 docker.go:653] duration metric: took 2.3207702s to copy over tarball
	I1028 12:37:01.204238    5536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:37:09.489265    5536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.2849333s)
	I1028 12:37:09.489411    5536 ssh_runner.go:146] rm: /preloaded.tar.lz4
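	
	Because the preload check above found no registry.k8s.io/kube-apiserver:v1.31.2 image and an empty repositories.json, minikube falls back to shipping the preloaded-images tarball into the VM and unpacking it straight into /var, where Docker's overlay2 image store lives. The extraction and cleanup steps, annotated:
	
	# Unpack the preloaded image tarball into /var (Docker's image root),
	# preserving xattrs so file capabilities survive, then drop the archive.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	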
	I1028 12:37:09.553645    5536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 12:37:09.572370    5536 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.15-0":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.31.2":"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0":"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.31.2":"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752":"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.31.2":"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe":"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.31.2":"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282":"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I1028 12:37:09.572370    5536 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1028 12:37:10.755487    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:37:10.971723    5536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:37:13.713320    5536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7415658s)
	I1028 12:37:13.723662    5536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1028 12:37:13.749233    5536 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:37:13.749446    5536 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 12:37:13.749446    5536 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:37:13.749540    5536 kubeadm.go:934] updating node { 172.27.244.98 8443 v1.31.2 docker true true} ...
	I1028 12:37:13.749874    5536 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-071500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.244.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:37:13.759276    5536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 12:37:13.827758    5536 command_runner.go:130] > cgroupfs
	I1028 12:37:13.827882    5536 cni.go:84] Creating CNI manager for ""
	I1028 12:37:13.827882    5536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:37:13.827882    5536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:37:13.828102    5536 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.244.98 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-071500 NodeName:multinode-071500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.244.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.244.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:37:13.828176    5536 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.244.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-071500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.27.244.98"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.244.98"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:37:13.839685    5536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:37:13.858323    5536 command_runner.go:130] > kubeadm
	I1028 12:37:13.858706    5536 command_runner.go:130] > kubectl
	I1028 12:37:13.858706    5536 command_runner.go:130] > kubelet
	I1028 12:37:13.858899    5536 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:37:13.871762    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:37:13.888986    5536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 12:37:13.921445    5536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:37:13.953827    5536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1028 12:37:14.000462    5536 ssh_runner.go:195] Run: grep 172.27.244.98	control-plane.minikube.internal$ /etc/hosts
	I1028 12:37:14.007451    5536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.244.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:37:14.042964    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:37:14.253260    5536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:37:14.286713    5536 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500 for IP: 172.27.244.98
	I1028 12:37:14.286788    5536 certs.go:194] generating shared ca certs ...
	I1028 12:37:14.286788    5536 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.287681    5536 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 12:37:14.287753    5536 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 12:37:14.287753    5536 certs.go:256] generating profile certs ...
	I1028 12:37:14.289010    5536 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.key
	I1028 12:37:14.289010    5536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.crt with IP's: []
	I1028 12:37:14.594411    5536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.crt ...
	I1028 12:37:14.594411    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.crt: {Name:mk1f5e585e0e9ad0432871d547ee6c6b1ba991a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.596368    5536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.key ...
	I1028 12:37:14.596368    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.key: {Name:mk8fa754fe6c198907533302a4c7b316f4588580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.597362    5536 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259
	I1028 12:37:14.598011    5536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.244.98]
	I1028 12:37:14.791678    5536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259 ...
	I1028 12:37:14.791678    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259: {Name:mka8e1efedf7e1deef86e5fd8565257166d7c19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.793243    5536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259 ...
	I1028 12:37:14.793243    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259: {Name:mke3e5e7965f50f386299c9c24bf21b96f6b90ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.794249    5536 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt
	I1028 12:37:14.807353    5536 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key
	I1028 12:37:14.809346    5536 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key
	I1028 12:37:14.809346    5536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt with IP's: []
	I1028 12:37:15.045583    5536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt ...
	I1028 12:37:15.045583    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt: {Name:mkfe7fa30da62946c38b24010c9b77700ad691e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:15.046648    5536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key ...
	I1028 12:37:15.046648    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key: {Name:mk70e1f18db667753ce0e2dac5958f21cb8425aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:15.047732    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 12:37:15.048403    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 12:37:15.048665    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 12:37:15.048803    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 12:37:15.048962    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 12:37:15.048962    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 12:37:15.048962    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 12:37:15.060654    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 12:37:15.061615    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 12:37:15.061615    5536 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 12:37:15.061615    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 12:37:15.062702    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 12:37:15.062971    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 12:37:15.063259    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 12:37:15.063530    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 12:37:15.063530    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.063530    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.064340    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.064710    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:37:15.116149    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:37:15.169421    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:37:15.224260    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:37:15.273495    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:37:15.321830    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:37:15.377389    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:37:15.419419    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:37:15.472334    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 12:37:15.523430    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 12:37:15.572078    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:37:15.622398    5536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:37:15.668379    5536 ssh_runner.go:195] Run: openssl version
	I1028 12:37:15.677497    5536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 12:37:15.689328    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 12:37:15.725265    5536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.732223    5536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.733120    5536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.748148    5536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.757749    5536 command_runner.go:130] > 51391683
	I1028 12:37:15.770018    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 12:37:15.802724    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 12:37:15.835085    5536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.842025    5536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.842025    5536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.853022    5536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.862933    5536 command_runner.go:130] > 3ec20f2e
	I1028 12:37:15.874352    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:37:15.911805    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:37:15.946874    5536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.954118    5536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.954211    5536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.965557    5536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.975907    5536 command_runner.go:130] > b5213941
	I1028 12:37:15.987441    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
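	
	The three blocks above repeat one pattern per CA certificate (9608.pem, 96082.pem, minikubeCA.pem): place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, compute its OpenSSL subject hash, and add a <hash>.0 symlink so OpenSSL-based clients inside the guest can find it during trust lookups. One iteration of that pattern, using the minikubeCA values from this run:
	
	# Subject-hash symlink so the system trust store picks up the CA.
	SRC=/usr/share/ca-certificates/minikubeCA.pem
	DST=/etc/ssl/certs/minikubeCA.pem
	sudo ln -fs "$SRC" "$DST"
	HASH=$(openssl x509 -hash -noout -in "$SRC")      # b5213941 in this run
	sudo ln -fs "$DST" "/etc/ssl/certs/${HASH}.0"
	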
	I1028 12:37:16.021190    5536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:37:16.029055    5536 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:37:16.029055    5536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:37:16.029055    5536 kubeadm.go:392] StartCluster: {Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:37:16.043248    5536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 12:37:16.093845    5536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:37:16.111538    5536 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1028 12:37:16.111538    5536 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1028 12:37:16.111538    5536 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1028 12:37:16.124364    5536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:37:16.155900    5536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:37:16.179481    5536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:37:16.179481    5536 kubeadm.go:157] found existing configuration files:
	
	I1028 12:37:16.191620    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:37:16.210306    5536 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:37:16.211312    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:37:16.222304    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:37:16.254282    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:37:16.274109    5536 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:37:16.274109    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:37:16.286292    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:37:16.318848    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:37:16.336488    5536 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:37:16.336488    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:37:16.347529    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:37:16.377438    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:37:16.396526    5536 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:37:16.397077    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:37:16.409528    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
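The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint and removed when the endpoint, or the file itself, is missing, so kubeadm init starts from a clean state. A minimal Go sketch of that pattern (function name is illustrative, and the commands run locally here rather than over SSH as in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleKubeconfigs mirrors the pattern in the log above: grep each
    // kubeconfig for the expected control-plane endpoint and remove the file
    // when the endpoint (or the file itself) is missing. In the log these
    // commands run over SSH inside the VM; here they run locally for brevity.
    func cleanStaleKubeconfigs(endpoint string) {
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, c := range confs {
    		// grep exits non-zero when the pattern or the file is missing.
    		if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, c)
    			_ = exec.Command("sudo", "rm", "-f", c).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }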
	I1028 12:37:16.428468    5536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:37:16.924653    5536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:37:16.924702    5536 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:37:31.100298    5536 command_runner.go:130] > [init] Using Kubernetes version: v1.31.2
	I1028 12:37:31.100368    5536 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:37:31.100506    5536 command_runner.go:130] > [preflight] Running pre-flight checks
	I1028 12:37:31.100563    5536 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:37:31.100738    5536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:37:31.100770    5536 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:37:31.100891    5536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:37:31.100953    5536 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:37:31.101098    5536 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:37:31.101180    5536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:37:31.101428    5536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:37:31.101428    5536 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:37:31.107557    5536 out.go:235]   - Generating certificates and keys ...
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:37:31.108998    5536 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1028 12:37:31.108998    5536 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:37:31.109273    5536 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109273    5536 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109273    5536 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:37:31.109273    5536 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1028 12:37:31.110447    5536 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:37:31.110447    5536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:37:31.110596    5536 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:37:31.110660    5536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:37:31.110820    5536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:37:31.110820    5536 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:37:31.110820    5536 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:37:31.110820    5536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:37:31.111205    5536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:37:31.111205    5536 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:37:31.111480    5536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:37:31.111480    5536 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:37:31.111773    5536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:37:31.111773    5536 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:37:31.112001    5536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:37:31.112001    5536 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:37:31.117218    5536 out.go:235]   - Booting up control plane ...
	I1028 12:37:31.118311    5536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:37:31.118311    5536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:37:31.118311    5536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:37:31.119112    5536 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:37:31.119364    5536 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:37:31.119364    5536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:37:31.119537    5536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:37:31.119537    5536 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1028 12:37:31.119657    5536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:37:31.119657    5536 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:37:31.119657    5536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:37:31.119657    5536 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:37:31.120281    5536 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.907019ms
	I1028 12:37:31.120281    5536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.907019ms
	I1028 12:37:31.120476    5536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:37:31.120476    5536 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:37:31.120523    5536 command_runner.go:130] > [api-check] The API server is healthy after 7.502651172s
	I1028 12:37:31.120523    5536 kubeadm.go:310] [api-check] The API server is healthy after 7.502651172s
	I1028 12:37:31.120816    5536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:37:31.120816    5536 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:37:31.121113    5536 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:37:31.121113    5536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:37:31.121113    5536 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:37:31.121113    5536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:37:31.121614    5536 command_runner.go:130] > [mark-control-plane] Marking the node multinode-071500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:37:31.121614    5536 kubeadm.go:310] [mark-control-plane] Marking the node multinode-071500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:37:31.121614    5536 command_runner.go:130] > [bootstrap-token] Using token: pbes7g.dxbg0wnwf67644gb
	I1028 12:37:31.121614    5536 kubeadm.go:310] [bootstrap-token] Using token: pbes7g.dxbg0wnwf67644gb
	I1028 12:37:31.125981    5536 out.go:235]   - Configuring RBAC rules ...
	I1028 12:37:31.127043    5536 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:37:31.127043    5536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:37:31.127226    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:37:31.127349    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:37:31.127592    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:37:31.127592    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:37:31.127842    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:37:31.127842    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:37:31.128115    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:37:31.128115    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:37:31.128512    5536 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:37:31.128512    5536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:37:31.128749    5536 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:37:31.128749    5536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:37:31.128749    5536 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1028 12:37:31.128749    5536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:37:31.128749    5536 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1028 12:37:31.128749    5536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:37:31.128749    5536 kubeadm.go:310] 
	I1028 12:37:31.129298    5536 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1028 12:37:31.129298    5536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:37:31.129298    5536 kubeadm.go:310] 
	I1028 12:37:31.129489    5536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:37:31.129489    5536 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1028 12:37:31.129489    5536 kubeadm.go:310] 
	I1028 12:37:31.129489    5536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:37:31.129489    5536 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1028 12:37:31.129809    5536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:37:31.129809    5536 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:37:31.130024    5536 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:37:31.130024    5536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:37:31.130024    5536 kubeadm.go:310] 
	I1028 12:37:31.130222    5536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:37:31.130222    5536 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1028 12:37:31.130222    5536 kubeadm.go:310] 
	I1028 12:37:31.130222    5536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:37:31.130471    5536 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:37:31.130577    5536 kubeadm.go:310] 
	I1028 12:37:31.130687    5536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:37:31.130687    5536 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1028 12:37:31.130687    5536 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:37:31.130687    5536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:37:31.130687    5536 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:37:31.130687    5536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:37:31.130687    5536 kubeadm.go:310] 
	I1028 12:37:31.130687    5536 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:37:31.131230    5536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:37:31.131445    5536 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1028 12:37:31.131445    5536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:37:31.131445    5536 kubeadm.go:310] 
	I1028 12:37:31.131661    5536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.131661    5536 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.131920    5536 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b \
	I1028 12:37:31.131920    5536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b \
	I1028 12:37:31.131920    5536 command_runner.go:130] > 	--control-plane 
	I1028 12:37:31.131920    5536 kubeadm.go:310] 	--control-plane 
	I1028 12:37:31.131920    5536 kubeadm.go:310] 
	I1028 12:37:31.132299    5536 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:37:31.132299    5536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:37:31.132299    5536 kubeadm.go:310] 
	I1028 12:37:31.132433    5536 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.132467    5536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.132562    5536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b 
	I1028 12:37:31.132562    5536 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b 
	I1028 12:37:31.132562    5536 cni.go:84] Creating CNI manager for ""
	I1028 12:37:31.132562    5536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:37:31.134922    5536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 12:37:31.150337    5536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 12:37:31.158678    5536 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1028 12:37:31.158678    5536 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I1028 12:37:31.158678    5536 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I1028 12:37:31.158888    5536 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 12:37:31.158888    5536 command_runner.go:130] > Access: 2024-10-28 12:35:34.092376600 +0000
	I1028 12:37:31.158888    5536 command_runner.go:130] > Modify: 2024-10-15 20:14:00.000000000 +0000
	I1028 12:37:31.158888    5536 command_runner.go:130] > Change: 2024-10-28 12:35:25.488000000 +0000
	I1028 12:37:31.158888    5536 command_runner.go:130] >  Birth: -
	I1028 12:37:31.159021    5536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 12:37:31.159099    5536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 12:37:31.215841    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 12:37:32.010863    5536 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1028 12:37:32.010863    5536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1028 12:37:32.010863    5536 command_runner.go:130] > serviceaccount/kindnet created
	I1028 12:37:32.010863    5536 command_runner.go:130] > daemonset.apps/kindnet created
	I1028 12:37:32.011046    5536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:37:32.025538    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:32.027637    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-071500 minikube.k8s.io/updated_at=2024_10_28T12_37_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=multinode-071500 minikube.k8s.io/primary=true
	I1028 12:37:32.048155    5536 command_runner.go:130] > -16
	I1028 12:37:32.048318    5536 ops.go:34] apiserver oom_adj: -16
	I1028 12:37:32.205967    5536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1028 12:37:32.216569    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:32.259764    5536 command_runner.go:130] > node/multinode-071500 labeled
	I1028 12:37:32.359935    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:32.718905    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:32.848265    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:33.218286    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:33.332739    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:33.718812    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:33.864455    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:34.218310    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:34.334176    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:34.717954    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:34.832944    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:35.218331    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:35.347964    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:35.720077    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:35.905700    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:36.216691    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:36.393053    5536 command_runner.go:130] > NAME      SECRETS   AGE
	I1028 12:37:36.393053    5536 command_runner.go:130] > default   0         1s
	I1028 12:37:36.396072    5536 kubeadm.go:1113] duration metric: took 4.3849762s to wait for elevateKubeSystemPrivileges
	I1028 12:37:36.396179    5536 kubeadm.go:394] duration metric: took 20.3668939s to StartCluster
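The repeated "kubectl get sa default" calls above are a wait loop: kubeadm has finished, and the test binary polls until the "default" service account appears in the new cluster before it proceeds (the elevateKubeSystemPrivileges step timed at 4.38s). A rough sketch of that loop, assuming the kubectl binary and kubeconfig paths shown in the log; the interval and timeout here are illustrative, not minikube's exact values:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA keeps running "kubectl get sa default" until the
    // default service account exists or the timeout expires, mirroring the
    // retry loop in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil // "default" service account is present
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for the default service account")
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.2/kubectl",
    		"/var/lib/minikube/kubeconfig", 30*time.Second)
    	fmt.Println(err)
    }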
	I1028 12:37:36.396179    5536 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:36.396456    5536 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:37:36.399063    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:36.400502    5536 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 12:37:36.400502    5536 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:37:36.400502    5536 addons.go:69] Setting storage-provisioner=true in profile "multinode-071500"
	I1028 12:37:36.401212    5536 addons.go:234] Setting addon storage-provisioner=true in "multinode-071500"
	I1028 12:37:36.401212    5536 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:37:36.400502    5536 addons.go:69] Setting default-storageclass=true in profile "multinode-071500"
	I1028 12:37:36.401212    5536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-071500"
	I1028 12:37:36.401212    5536 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:37:36.401938    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:36.401938    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:36.404336    5536 out.go:177] * Verifying Kubernetes components...
	I1028 12:37:36.429978    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:37:36.857284    5536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:37:36.910213    5536 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:37:36.911249    5536 kapi.go:59] client config for multinode-071500: &rest.Config{Host:"https://172.27.244.98:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 12:37:36.913828    5536 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 12:37:36.916837    5536 node_ready.go:35] waiting up to 6m0s for node "multinode-071500" to be "Ready" ...
	I1028 12:37:36.916837    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:36.916837    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:36.916837    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:36.916837    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:36.946331    5536 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1028 12:37:36.946331    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:36.946331    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:36 GMT
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Audit-Id: 13a439ec-b907-4f16-962f-6da84ad0663a
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:36.946331    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:36.946869    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:37.417826    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:37.417826    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:37.417826    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:37.417826    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:37.428199    5536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 12:37:37.428332    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:37.428332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:37.428332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:37 GMT
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Audit-Id: de281d49-2b8c-4166-864e-7eae9acd0b40
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:37.429228    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:37.917808    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:37.917808    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:37.917808    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:37.917808    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:37.923211    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:37.923300    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Audit-Id: 315c1fad-90e1-49fd-83f4-a2e4746b5d55
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:37.923300    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:37.923300    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:37 GMT
	I1028 12:37:37.923685    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:38.418206    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:38.418206    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:38.418206    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:38.418206    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:38.422841    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:38.422955    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:38.423027    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:38 GMT
	I1028 12:37:38.423027    5536 round_trippers.go:580]     Audit-Id: dd40aad6-d607-403b-83c7-35647833649a
	I1028 12:37:38.423027    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:38.423090    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:38.423090    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:38.423090    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:38.423353    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:38.770369    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:38.770369    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:38.773361    5536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:37:38.775361    5536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:37:38.775361    5536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:37:38.775361    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:38.954378    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:38.954378    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:38.954378    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:38.954378    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:38.965371    5536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 12:37:38.965371    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:38.965371    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:38 GMT
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Audit-Id: c253f90b-9a90-4032-9191-0d5c8f130bea
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:38.965371    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:38.965371    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:38.966374    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
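The GET requests to /api/v1/nodes/multinode-071500 above come from the node-readiness wait: the Node object is fetched roughly every 500ms and its Ready condition checked, for up to 6m0s. A small client-go sketch of the same check, assuming the kubeconfig path shown earlier in the log (not minikube's actual implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has a Ready condition of True,
    // the same check the node_ready.go wait loop above is performing.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Kubeconfig path and node name taken from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		`C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
    	if err != nil {
    		panic(err)
    	}
    	cs, _ := kubernetes.NewForConfig(cfg)
    	for {
    		ok, err := nodeReady(cs, "multinode-071500")
    		if err == nil && ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }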
	I1028 12:37:38.982786    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:38.982786    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:38.983650    5536 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:37:38.984411    5536 kapi.go:59] client config for multinode-071500: &rest.Config{Host:"https://172.27.244.98:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 12:37:38.985810    5536 addons.go:234] Setting addon default-storageclass=true in "multinode-071500"
	I1028 12:37:38.985810    5536 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:37:38.986556    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:39.418155    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:39.418155    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:39.418155    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:39.418155    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:39.423324    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:39.423324    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:39.423324    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:39.423324    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:39 GMT
	I1028 12:37:39.423324    5536 round_trippers.go:580]     Audit-Id: 11bf0cd5-6e33-43b8-b1ff-1c1207d2d78d
	I1028 12:37:39.423426    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:39.423426    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:39.423426    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:39.423929    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:39.917623    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:39.917623    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:39.917623    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:39.917623    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:39.922072    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:39.922072    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Audit-Id: 54c92998-a915-4dde-ad0e-de6bb6b75701
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:39.922072    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:39.922072    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:39 GMT
	I1028 12:37:39.922864    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:40.417917    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:40.417917    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:40.417917    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:40.417917    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:40.423318    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:40.423318    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:40.423318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:40.423318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:40 GMT
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Audit-Id: 26e8d931-18e5-4909-aacc-28c97dc00a2b
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:40.423911    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:40.917365    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:40.917365    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:40.917365    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:40.917365    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:40.922654    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:40.922794    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:40.922794    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:40 GMT
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Audit-Id: dcf84a97-d333-497e-af40-52abb6836f11
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:40.922861    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:40.923111    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:41.269550    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:41.269550    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:41.269650    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:37:41.394147    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:41.394147    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:41.394225    5536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:37:41.394225    5536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:37:41.394225    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:41.417271    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:41.417271    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:41.417271    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:41.417271    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:41.423434    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:41.423434    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:41.423566    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:41.423566    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:41 GMT
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Audit-Id: 04bb6ff6-0c2a-4d9a-8f3d-2ce48de01279
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:41.424442    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:41.425164    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
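
Editor's note: the repeated GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500 requests above are minikube polling the control-plane node until its Ready condition turns True (node_ready.go keeps reporting "Ready":"False" here). The following is a minimal sketch of an equivalent poll written against client-go; it is an assumption for illustration, not minikube's own code, and the node name and ~500ms cadence are simply taken from this log.

	// readiness_poll.go: illustrative only; assumes a kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is True.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			// Each iteration corresponds to one GET /api/v1/nodes/<name> in the log above.
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-071500", metav1.GetOptions{})
			if err == nil && nodeIsReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the polling interval visible in the timestamps
		}
	}
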
	I1028 12:37:41.917272    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:41.917272    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:41.917272    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:41.917272    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:41.920301    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:41.920301    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Audit-Id: 5daa03f7-44ed-4e70-89e7-117527675faa
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:41.920301    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:41.920301    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:41 GMT
	I1028 12:37:41.921311    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:42.418285    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:42.418285    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:42.418285    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:42.418285    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:42.423806    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:42.423806    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:42.423913    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:42 GMT
	I1028 12:37:42.423913    5536 round_trippers.go:580]     Audit-Id: 769b8895-5aa0-4a3a-b1ef-eff9550c8432
	I1028 12:37:42.424120    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:42.424120    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:42.424232    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:42.424232    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:42.425586    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:42.917897    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:42.917897    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:42.917897    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:42.917897    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:42.922254    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:42.922318    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Audit-Id: f2f91884-e6c2-41d6-bf3c-a1665ec41df1
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:42.922318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:42.922318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:42 GMT
	I1028 12:37:42.922594    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:43.417872    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:43.417872    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:43.417872    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:43.417872    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:43.423144    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:43.423144    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:43.423144    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:43.423234    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:43 GMT
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Audit-Id: 55f2a4dd-63e5-4c79-86d1-c34480cf5d94
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:43.423681    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:43.720029    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:43.720029    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:43.720129    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:37:43.917499    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:43.917499    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:43.917499    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:43.917499    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:44.021466    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:37:44.021466    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:44.023015    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:37:44.135798    5536 round_trippers.go:574] Response Status: 200 OK in 218 milliseconds
	I1028 12:37:44.135798    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:44.135798    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:44.135798    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:44 GMT
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Audit-Id: acd70d5d-993b-4752-ad93-e9477064e77b
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:44.136800    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:44.136800    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:44.173816    5536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:37:44.417577    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:44.417577    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:44.417577    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:44.417577    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:44.428062    5536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 12:37:44.428161    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:44.428161    5536 round_trippers.go:580]     Audit-Id: 69d74b5e-baad-4175-a585-8199d7c70df6
	I1028 12:37:44.428161    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:44.428161    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:44.428266    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:44.428266    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:44.428266    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:44 GMT
	I1028 12:37:44.431704    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:44.843361    5536 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1028 12:37:44.843501    5536 command_runner.go:130] > pod/storage-provisioner created
	I1028 12:37:44.916989    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:44.916989    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:44.916989    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:44.916989    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:44.920985    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:44.921611    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:44.921611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:44.921611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:44 GMT
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Audit-Id: feb5aeb9-98ce-4f04-9924-2e80dec153be
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:44.922132    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:45.417238    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:45.417238    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:45.417238    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:45.417238    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:45.422779    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:45.422779    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:45.422779    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:45.422779    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:45.422779    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:45.422779    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:45.423015    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:45 GMT
	I1028 12:37:45.423015    5536 round_trippers.go:580]     Audit-Id: fba930e6-bd4d-486b-a240-6142705fb9be
	I1028 12:37:45.424058    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:45.917497    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:45.917497    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:45.917497    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:45.917497    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:45.922452    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:45.922577    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:45.922577    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:45 GMT
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Audit-Id: 54158cca-94b6-442f-b57a-4ea5fc4faa01
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:45.922577    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:45.922577    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:46.345098    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:37:46.345098    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:46.346577    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
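
Editor's note: the libmachine lines above show the Hyper-V driver shelling out to PowerShell with "(( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]" and then opening an SSH client to the returned address (172.27.244.98). The sketch below reproduces just that IP lookup from Go; it is a hedged illustration using the exact expression quoted in the log, not the driver's actual implementation, and it only works on a Windows host with the Hyper-V module available.

	// vm_ip.go: illustrative sketch of the PowerShell invocation seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hypervVMIP runs the same PowerShell expression libmachine logs above and
	// returns the VM's first reported IP address.
	func hypervVMIP(vmName string) (string, error) {
		ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		ip, err := hypervVMIP("multinode-071500")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 172.27.244.98 in this run
	}
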
	I1028 12:37:46.417878    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:46.417878    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.417878    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.417878    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.422079    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:46.422079    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.422079    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Audit-Id: 32772b43-c260-4e49-8b78-5bb7a744b894
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.422079    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.422079    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:46.422985    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:46.480260    5536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:37:46.645584    5536 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1028 12:37:46.645584    5536 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 12:37:46.645584    5536 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 12:37:46.646580    5536 round_trippers.go:463] GET https://172.27.244.98:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 12:37:46.646580    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.646580    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.646580    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.650458    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:46.650531    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.650531    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.650531    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Content-Length: 1273
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Audit-Id: bd43eef9-70df-416d-819b-444843a35d1c
	I1028 12:37:46.650531    5536 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"standard","uid":"b8cab4a0-88a1-4868-a1a9-823301b9aaf3","resourceVersion":"405","creationTimestamp":"2024-10-28T12:37:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T12:37:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1028 12:37:46.651434    5536 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b8cab4a0-88a1-4868-a1a9-823301b9aaf3","resourceVersion":"405","creationTimestamp":"2024-10-28T12:37:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T12:37:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 12:37:46.651434    5536 round_trippers.go:463] PUT https://172.27.244.98:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 12:37:46.651434    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.651434    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.651434    5536 round_trippers.go:473]     Content-Type: application/json
	I1028 12:37:46.651434    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.655347    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:46.655431    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.655431    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.655431    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Content-Length: 1220
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Audit-Id: 1d5a47d0-08fd-4583-a252-45d96ff04aa1
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.655431    5536 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b8cab4a0-88a1-4868-a1a9-823301b9aaf3","resourceVersion":"405","creationTimestamp":"2024-10-28T12:37:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T12:37:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 12:37:46.658587    5536 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 12:37:46.662914    5536 addons.go:510] duration metric: took 10.2622955s for enable addons: enabled=[storage-provisioner default-storageclass]
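
Editor's note: the GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard above is the default-storageclass addon re-writing the "standard" StorageClass with the storageclass.kubernetes.io/is-default-class annotation set to "true", after which the addons step completes. A minimal client-go sketch of that update follows; it is an assumed illustration of the API traffic shown in the log, not minikube's addon code.

	// default_sc.go: illustrative only; mirrors the GET-then-PUT visible above.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()
		// Fetch the existing StorageClass (the GET in the log), annotate it as
		// default, and write it back (the PUT in the log).
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
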
	I1028 12:37:46.917326    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:46.917326    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.917326    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.917326    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.921733    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:46.922255    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.922255    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.922255    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Audit-Id: a4da76dc-e390-4d81-bc00-39c695dfc6b5
	I1028 12:37:46.922615    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:47.417210    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:47.417847    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:47.417847    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:47.417847    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:47.421611    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:47.421611    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:47.421611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:47 GMT
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Audit-Id: 802cd577-73e8-4ee7-bf57-aa2cba12dac4
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:47.421611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:47.421829    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:47.918058    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:47.918058    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:47.918058    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:47.918058    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:47.922893    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:47.923004    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:47.923004    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:47.923004    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:47 GMT
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Audit-Id: fcaac02e-9221-4516-9547-bb246eee81fc
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:47.923238    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:48.417365    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:48.417365    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:48.417365    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:48.417365    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:48.422636    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:48.422736    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:48.422736    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:48 GMT
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Audit-Id: a4b8fbba-5827-4e92-9107-47b52443cd53
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:48.422736    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:48.422941    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:48.423483    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:48.917344    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:48.917877    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:48.917877    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:48.917877    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:48.921988    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:48.921988    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Audit-Id: d2895e7b-e5fc-42bd-8dde-df191967c52a
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:48.921988    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:48.921988    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:48 GMT
	I1028 12:37:48.922564    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:49.417207    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:49.417207    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:49.417207    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:49.417207    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:49.422680    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:49.422680    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Audit-Id: 9afcc0ce-6f3b-43c6-9d69-77b26d48829d
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:49.422829    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:49.422829    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:49 GMT
	I1028 12:37:49.422958    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:49.917320    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:49.917320    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:49.917320    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:49.917320    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:49.922860    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:49.922860    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:49.922860    5536 round_trippers.go:580]     Audit-Id: a3119704-f067-4464-9ed5-befc4e973211
	I1028 12:37:49.922860    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:49.922860    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:49.922860    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:49.922860    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:49.923833    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:49 GMT
	I1028 12:37:49.923833    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:50.418192    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:50.418192    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:50.418192    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:50.418192    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:50.423961    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:50.424066    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:50 GMT
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Audit-Id: fe0775c1-cf32-4aa2-9b7e-45be79b0a419
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:50.424066    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:50.424066    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:50.424457    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:50.424772    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:50.917373    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:50.917373    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:50.917373    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:50.917373    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:50.922342    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:50.922430    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:50.922482    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:50.922482    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:50 GMT
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Audit-Id: 3c40a406-e32f-4f38-9b50-ecad14338b90
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:50.922746    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:51.417190    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:51.417190    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:51.417190    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:51.417190    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:51.422738    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:51.422738    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Audit-Id: a635f7e2-e822-44e6-8c42-0ba700ca8814
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:51.422738    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:51.422738    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:51 GMT
	I1028 12:37:51.423150    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:51.917349    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:51.917349    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:51.917349    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:51.917349    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:51.921498    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:51.921498    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:51.921498    5536 round_trippers.go:580]     Audit-Id: b76e229c-3dff-4b03-bc46-fc353bf321c0
	I1028 12:37:51.921498    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:51.921498    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:51.921649    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:51.921649    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:51.921649    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:51 GMT
	I1028 12:37:51.921979    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:52.418023    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:52.418023    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:52.418023    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:52.418023    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:52.423508    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:52.423583    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Audit-Id: 8d506957-7b45-4f41-b468-bcd5fa83d12a
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:52.423583    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:52.423583    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:52 GMT
	I1028 12:37:52.424149    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:52.917152    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:52.917152    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:52.917152    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:52.917152    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:52.922991    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:52.922991    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:52.922991    5536 round_trippers.go:580]     Audit-Id: 011922bc-880f-488f-a4da-532ee8f0bf2b
	I1028 12:37:52.923081    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:52.923081    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:52.923081    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:52.923081    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:52.923081    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:52 GMT
	I1028 12:37:52.923572    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:52.923834    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:53.417623    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:53.417623    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:53.417722    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:53.417722    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:53.421753    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:53.421753    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:53.421753    5536 round_trippers.go:580]     Audit-Id: 80fb923e-8edc-40ff-a850-fbf6986b0c07
	I1028 12:37:53.421840    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:53.421840    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:53.421840    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:53.421840    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:53.421840    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:53 GMT
	I1028 12:37:53.422354    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:53.917230    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:53.917230    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:53.917230    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:53.917230    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:53.922722    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:53.923259    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:53 GMT
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Audit-Id: c95063d9-1566-4250-a4fe-85c56f2c8ad0
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:53.923352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:53.923352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:53.923570    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:54.417126    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:54.417949    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:54.417949    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:54.417949    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:54.421255    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:54.422102    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Audit-Id: 4e9882e0-61eb-4418-b1fe-29e24956b5c5
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:54.422102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:54.422102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:54 GMT
	I1028 12:37:54.422536    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:54.917865    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:54.917865    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:54.917865    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:54.917865    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:54.938043    5536 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1028 12:37:54.938043    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:54.938043    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:54.938143    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:54.938143    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:54.938143    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:54.938143    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:54 GMT
	I1028 12:37:54.938143    5536 round_trippers.go:580]     Audit-Id: 9b1b7075-8a96-4adb-a6b2-29491215d2c9
	I1028 12:37:54.938550    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:54.939107    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:55.417146    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:55.417146    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:55.417146    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:55.417146    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:55.422402    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:55.422861    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:55 GMT
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Audit-Id: 6ab632ea-c495-4d17-80dc-77e29d268631
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:55.422861    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:55.422861    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:55.423021    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:55.918660    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:55.918660    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:55.918660    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:55.918793    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:55.925641    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:55.925641    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:55.925641    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:55 GMT
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Audit-Id: 0f6fbcf3-7f4d-4f45-b925-27eb5c3a105a
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:55.925641    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:55.925641    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:56.418369    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:56.418369    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:56.418369    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:56.418369    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:56.422261    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:56.422896    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:56.422896    5536 round_trippers.go:580]     Audit-Id: 2ef000cc-4270-44b8-b69e-f354a2858c86
	I1028 12:37:56.422896    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:56.422896    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:56.422896    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:56.423010    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:56.423010    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:56 GMT
	I1028 12:37:56.423341    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:56.917646    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:56.917728    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:56.917728    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:56.917728    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:56.921557    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:56.921557    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:56.921557    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:56.921557    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:56 GMT
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Audit-Id: 771053e1-b0fd-4f9b-a932-a49b8e6a2736
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:56.921982    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:57.417561    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:57.417561    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:57.417561    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:57.417561    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:57.422350    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:57.422350    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Audit-Id: 07f7246f-d11f-4a63-b228-bad5b3d4c29c
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:57.422350    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:57.422350    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:57 GMT
	I1028 12:37:57.422678    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:57.422867    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:57.917236    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:57.917811    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:57.917811    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:57.917811    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:57.922375    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:57.922518    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:57.922518    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:57 GMT
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Audit-Id: dea22cd0-853c-4ecd-981b-6df3f4753b3d
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:57.922647    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:57.923083    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:58.417249    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:58.417772    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.417772    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.417772    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.421269    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:58.422257    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Audit-Id: eb19e890-4497-41c2-bb2c-b8d436b9fecf
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.422257    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.422257    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.422406    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:58.917800    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:58.917800    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.917800    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.917800    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.922886    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:58.922886    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.922886    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.922886    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Audit-Id: e6e7cef2-5007-45d9-a6cd-67e56f69c797
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.922886    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:37:58.923975    5536 node_ready.go:49] node "multinode-071500" has status "Ready":"True"
	I1028 12:37:58.923975    5536 node_ready.go:38] duration metric: took 22.0068894s for node "multinode-071500" to be "Ready" ...
	I1028 12:37:58.923975    5536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:37:58.923975    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:37:58.923975    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.923975    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.923975    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.928398    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:58.928454    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Audit-Id: d5aa28a9-cefd-454d-b363-90d94b2ea667
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.928454    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.928454    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.929755    5536 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65510 chars]
	I1028 12:37:58.935700    5536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace to be "Ready" ...
	I1028 12:37:58.935700    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:37:58.935700    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.935700    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.935700    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.939408    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:58.939408    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Audit-Id: 3e32ea8e-e381-45a8-bc05-294c5c14736c
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.939473    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.939473    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.939589    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I1028 12:37:58.940265    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:58.940323    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.940323    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.940323    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.946996    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:58.946996    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Audit-Id: 312bc620-8df9-421b-8df6-0355a2b96de8
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.947112    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.947112    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.947112    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:37:59.436166    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:37:59.436166    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.436166    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.436166    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.442166    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:59.442166    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Audit-Id: cc344c01-b200-45c5-98d4-948843aba777
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.442166    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.442166    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.442166    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I1028 12:37:59.443405    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:59.443624    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.443624    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.443624    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.451930    5536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 12:37:59.451930    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Audit-Id: 9b011c79-161f-4716-8904-e39bdcae869a
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.451930    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.451930    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.452517    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:37:59.936552    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:37:59.936552    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.936552    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.936552    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.941667    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:59.941667    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.941667    5536 round_trippers.go:580]     Audit-Id: 12135fb7-c562-411f-bfa5-9fb521c9c5ba
	I1028 12:37:59.941667    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.941667    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.941751    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.941751    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.941751    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.941936    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I1028 12:37:59.942932    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:59.943016    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.943016    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.943016    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.945709    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:37:59.946409    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.946409    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.946409    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Audit-Id: 328dbcfa-0af2-4428-9ada-a16fb320c79d
	I1028 12:37:59.951062    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:38:00.436528    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:38:00.436528    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.436528    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.436528    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.441027    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:00.441087    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Audit-Id: cf08cfaa-9386-4274-8b50-b3dd656d7b5d
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.441087    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.441087    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.441087    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"437","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7063 chars]
	I1028 12:38:00.441856    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:00.441856    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.441856    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.441856    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.445490    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:00.445490    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.445490    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.445490    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Audit-Id: c344ee00-a251-40bf-80a7-715814066b3e
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.446154    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:38:00.936690    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:38:00.936690    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.936690    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.936690    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.941706    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:38:00.941770    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.941770    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.941770    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Audit-Id: 22d1eea9-2802-42eb-8ee9-90466b3d8269
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.941965    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"437","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7063 chars]
	I1028 12:38:00.941965    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:00.942705    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.942705    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.942705    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.948102    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:38:00.948102    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.948102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.948102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Audit-Id: d6fbc107-106f-4bcb-bc4d-95d89a1a2e67
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.948455    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:38:00.948719    5536 pod_ready.go:103] pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace has status "Ready":"False"
	I1028 12:38:01.436109    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:38:01.436109    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.436109    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.436109    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.466757    5536 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1028 12:38:01.466820    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.466820    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.466820    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Audit-Id: e7b8af39-fd45-4be2-bc85-29997b1b880e
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.471844    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"443","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I1028 12:38:01.472523    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.472523    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.472523    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.472523    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.476286    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:01.476534    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.476534    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.476534    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.476534    5536 round_trippers.go:580]     Audit-Id: 23a56b13-6a26-4235-9a63-dfe8c89b63ba
	I1028 12:38:01.476534    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.476607    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.476669    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.476916    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.477446    5536 pod_ready.go:93] pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.477446    5536 pod_ready.go:82] duration metric: took 2.5417173s for pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.477508    5536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w5gxr" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.477569    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w5gxr
	I1028 12:38:01.477631    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.477631    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.477692    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.482391    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.482883    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.482883    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.482883    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Audit-Id: 479da8b5-f766-48ce-9af0-5be9317d8663
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.483054    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-w5gxr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852","resourceVersion":"450","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I1028 12:38:01.483737    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.483737    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.483737    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.483737    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.491656    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:38:01.491682    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.491682    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Audit-Id: 916170ed-c647-4f63-b92c-5e3e16caaa21
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.491682    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.491861    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.492458    5536 pod_ready.go:93] pod "coredns-7c65d6cfc9-w5gxr" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.492458    5536 pod_ready.go:82] duration metric: took 14.9497ms for pod "coredns-7c65d6cfc9-w5gxr" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.492458    5536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.492458    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-071500
	I1028 12:38:01.492458    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.492458    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.492458    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.496063    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:01.496217    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.496217    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Audit-Id: 710f37c4-f0cb-4026-88c1-74a59474a51d
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.496217    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.497044    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-071500","namespace":"kube-system","uid":"0def4362-6242-450b-a917-ea0720c76929","resourceVersion":"387","creationTimestamp":"2024-10-28T12:37:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.244.98:2379","kubernetes.io/config.hash":"f5222b65c24a069db70fce37c92f9fa9","kubernetes.io/config.mirror":"f5222b65c24a069db70fce37c92f9fa9","kubernetes.io/config.seen":"2024-10-28T12:37:30.614257114Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6465 chars]
	I1028 12:38:01.497627    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.497686    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.497686    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.497686    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.500045    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:38:01.500045    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Audit-Id: 5be1547d-3fae-4e6b-8fcc-29a0a3b63357
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.500045    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.500045    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.501168    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.501168    5536 pod_ready.go:93] pod "etcd-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.501168    5536 pod_ready.go:82] duration metric: took 8.7107ms for pod "etcd-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.501168    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.501168    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-071500
	I1028 12:38:01.501168    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.501168    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.501168    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.505088    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:01.505352    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.505352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.505352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Audit-Id: 064abf4a-e977-4559-a65a-6a966a18f532
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.505675    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-071500","namespace":"kube-system","uid":"0216da7d-e0eb-403f-927d-5bcd780c85bb","resourceVersion":"354","creationTimestamp":"2024-10-28T12:37:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.244.98:8443","kubernetes.io/config.hash":"2d85a7b9464ac245c51684738092f57c","kubernetes.io/config.mirror":"2d85a7b9464ac245c51684738092f57c","kubernetes.io/config.seen":"2024-10-28T12:37:22.124230458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I1028 12:38:01.506307    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.506361    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.506361    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.506361    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.512077    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.512077    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.512077    5536 round_trippers.go:580]     Audit-Id: 1f0cbc0b-ee0b-4d71-85cc-098e41bf28a3
	I1028 12:38:01.512077    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.512077    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.512077    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.512077    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.512162    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.512462    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.512657    5536 pod_ready.go:93] pod "kube-apiserver-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.512657    5536 pod_ready.go:82] duration metric: took 11.4886ms for pod "kube-apiserver-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.512657    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.512657    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-071500
	I1028 12:38:01.512657    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.512657    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.512657    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.515332    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:38:01.515332    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Audit-Id: 9102c7e3-f1e7-4fe2-af82-41d579b8b3bb
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.515332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.515332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.515614    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-071500","namespace":"kube-system","uid":"f4f02743-40df-46cc-b3bf-39b846325812","resourceVersion":"383","creationTimestamp":"2024-10-28T12:37:29Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55a02bb632fe7724e43bb68086d66024","kubernetes.io/config.mirror":"55a02bb632fe7724e43bb68086d66024","kubernetes.io/config.seen":"2024-10-28T12:37:22.124231758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I1028 12:38:01.516207    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.516251    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.516251    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.516251    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.523038    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:38:01.523038    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.523038    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Audit-Id: cdc0c1c2-9b03-41d3-b136-b7341f9f1e40
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.523038    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.523038    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.523038    5536 pod_ready.go:93] pod "kube-controller-manager-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.523038    5536 pod_ready.go:82] duration metric: took 10.3807ms for pod "kube-controller-manager-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.523038    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tgw89" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.637745    5536 request.go:632] Waited for 114.7056ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgw89
	I1028 12:38:01.637745    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgw89
	I1028 12:38:01.637745    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.637745    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.637745    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.642378    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.642378    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Audit-Id: 86fee2cd-d6de-4823-85fb-f29c06e30e96
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.642378    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.642378    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.642686    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tgw89","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d","resourceVersion":"381","creationTimestamp":"2024-10-28T12:37:35Z","labels":{"controller-revision-hash":"77987969cc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2e018c8b-485d-4a2a-bf11-b2a0153acdac","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e018c8b-485d-4a2a-bf11-b2a0153acdac\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6194 chars]
	I1028 12:38:01.836385    5536 request.go:632] Waited for 192.79ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.836385    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.836910    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.837007    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.837007    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.841263    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.841263    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.841364    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.841364    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Audit-Id: 5a2e012c-f53e-4358-9839-c1517c54e5ad
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.841606    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.841606    5536 pod_ready.go:93] pod "kube-proxy-tgw89" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.841606    5536 pod_ready.go:82] duration metric: took 318.5641ms for pod "kube-proxy-tgw89" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.841606    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:02.037098    5536 request.go:632] Waited for 195.4897ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-071500
	I1028 12:38:02.037098    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-071500
	I1028 12:38:02.037098    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.037098    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.037098    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.041197    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:02.041197    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.041197    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.041197    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Audit-Id: 43d311f4-0aca-4b0f-9356-e74ca0e624ae
	I1028 12:38:02.042610    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-071500","namespace":"kube-system","uid":"c7b70910-55e3-4e8d-a167-f30516fc8241","resourceVersion":"389","creationTimestamp":"2024-10-28T12:37:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee6f7d791319591d1ac968147343724b","kubernetes.io/config.mirror":"ee6f7d791319591d1ac968147343724b","kubernetes.io/config.seen":"2024-10-28T12:37:30.614268714Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I1028 12:38:02.236391    5536 request.go:632] Waited for 193.1359ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:02.236391    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:02.236391    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.236391    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.236391    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.240300    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:02.241062    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.241062    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.241062    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Audit-Id: 4c274be6-0927-4541-8727-5995e02981bc
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.241595    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:02.241595    5536 pod_ready.go:93] pod "kube-scheduler-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:02.241595    5536 pod_ready.go:82] duration metric: took 399.9852ms for pod "kube-scheduler-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:02.241595    5536 pod_ready.go:39] duration metric: took 3.3175825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
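	The readiness wait that ends here (pod_ready.go) alternates GET requests for each kube-system pod and for the node object, roughly every 500 ms, until the pod reports the Ready condition. The following is a minimal client-go sketch of that check, not minikube's own pod_ready.go; the pod name, namespace, 6-minute budget and ~500 ms interval are taken from the log above, while the kubeconfig path and program layout are assumptions for illustration only.

// Minimal sketch (illustrative, not minikube's implementation): poll a pod
// until its Ready condition is True, mirroring the GET loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same 6m0s budget the log reports for each pod.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-j8vdn", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond): // the log polls at ~500 ms intervals
		}
	}
}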
	I1028 12:38:02.242142    5536 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:38:02.254777    5536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:38:02.286294    5536 command_runner.go:130] > 2177
	I1028 12:38:02.286431    5536 api_server.go:72] duration metric: took 25.8856364s to wait for apiserver process to appear ...
	I1028 12:38:02.286431    5536 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:38:02.286496    5536 api_server.go:253] Checking apiserver healthz at https://172.27.244.98:8443/healthz ...
	I1028 12:38:02.298825    5536 api_server.go:279] https://172.27.244.98:8443/healthz returned 200:
	ok
	I1028 12:38:02.299052    5536 round_trippers.go:463] GET https://172.27.244.98:8443/version
	I1028 12:38:02.299162    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.299162    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.299162    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.301504    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:38:02.301578    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.301578    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.301578    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Content-Length: 263
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Audit-Id: 1a5cbb00-3b71-4056-8f12-536285df3a42
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.301578    5536 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.2",
	  "gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-10-22T20:28:14Z",
	  "goVersion": "go1.22.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1028 12:38:02.301578    5536 api_server.go:141] control plane version: v1.31.2
	I1028 12:38:02.301578    5536 api_server.go:131] duration metric: took 15.1461ms to wait for apiserver health ...
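	Once every tracked pod is Ready, the log switches to probing the apiserver directly: GET /healthz (which returns the plain-text body "ok") and GET /version (which returns the build-info JSON shown above). A hedged client-go sketch of those two probes follows; only the endpoints and expected responses come from the log, the clientset setup is an assumption as in the previous sketch.

// Illustrative sketch: the /healthz and /version probes seen in the log,
// issued through client-go's discovery client.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz: a healthy apiserver answers 200 with the body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	fmt.Printf("healthz: %q (err=%v)\n", string(body), err)
	// GET /version: the same build-info JSON the log prints (gitVersion, goVersion, ...).
	ver, err := cs.Discovery().ServerVersion()
	if err == nil {
		fmt.Println("control plane version:", ver.GitVersion)
	}
}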
	I1028 12:38:02.301578    5536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:38:02.437412    5536 request.go:632] Waited for 135.8326ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.437412    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.437412    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.437412    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.437412    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.445416    5536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 12:38:02.445416    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Audit-Id: 1a1f4e6c-2dc4-4c66-9a23-d40fcfa0c669
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.445416    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.445416    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.447593    5536 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"443","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65755 chars]
	I1028 12:38:02.451117    5536 system_pods.go:59] 9 kube-system pods found
	I1028 12:38:02.451167    5536 system_pods.go:61] "coredns-7c65d6cfc9-j8vdn" [72f8f3d0-e08c-44f1-8f74-6f5685c5bf75] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "coredns-7c65d6cfc9-w5gxr" [2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "etcd-multinode-071500" [0def4362-6242-450b-a917-ea0720c76929] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kindnet-c7z7c" [9151b032-96d2-40e4-b4e6-6bac4ccb5180] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-apiserver-multinode-071500" [0216da7d-e0eb-403f-927d-5bcd780c85bb] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-controller-manager-multinode-071500" [f4f02743-40df-46cc-b3bf-39b846325812] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-proxy-tgw89" [fe651213-d8ad-43ae-b151-dd8ad6cd1e8d] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-scheduler-multinode-071500" [c7b70910-55e3-4e8d-a167-f30516fc8241] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "storage-provisioner" [3041ff50-d6af-4c68-803f-78a36f22c000] Running
	I1028 12:38:02.451199    5536 system_pods.go:74] duration metric: took 149.6197ms to wait for pod list to return data ...
	I1028 12:38:02.451304    5536 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:38:02.637105    5536 request.go:632] Waited for 185.6853ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/default/serviceaccounts
	I1028 12:38:02.637105    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/default/serviceaccounts
	I1028 12:38:02.637105    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.637105    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.637105    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.643330    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:38:02.643330    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.643330    5536 round_trippers.go:580]     Content-Length: 261
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Audit-Id: a99a47ba-cfa3-4bec-a954-5e60cb817dce
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.643414    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.643414    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.643414    5536 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"576087d0-542d-474e-b0d9-e730f2067d88","resourceVersion":"355","creationTimestamp":"2024-10-28T12:37:35Z"}}]}
	I1028 12:38:02.643923    5536 default_sa.go:45] found service account: "default"
	I1028 12:38:02.644001    5536 default_sa.go:55] duration metric: took 192.6947ms for default service account to be created ...
	I1028 12:38:02.644001    5536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:38:02.836303    5536 request.go:632] Waited for 192.1699ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.836303    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.836303    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.836303    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.836303    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.841615    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:02.841615    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.841615    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.841615    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Audit-Id: dcf84d68-734d-42ef-8426-f87cf5c832fc
	I1028 12:38:02.842955    5536 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"443","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65755 chars]
	I1028 12:38:02.846144    5536 system_pods.go:86] 9 kube-system pods found
	I1028 12:38:02.846217    5536 system_pods.go:89] "coredns-7c65d6cfc9-j8vdn" [72f8f3d0-e08c-44f1-8f74-6f5685c5bf75] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "coredns-7c65d6cfc9-w5gxr" [2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "etcd-multinode-071500" [0def4362-6242-450b-a917-ea0720c76929] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "kindnet-c7z7c" [9151b032-96d2-40e4-b4e6-6bac4ccb5180] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "kube-apiserver-multinode-071500" [0216da7d-e0eb-403f-927d-5bcd780c85bb] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "kube-controller-manager-multinode-071500" [f4f02743-40df-46cc-b3bf-39b846325812] Running
	I1028 12:38:02.846290    5536 system_pods.go:89] "kube-proxy-tgw89" [fe651213-d8ad-43ae-b151-dd8ad6cd1e8d] Running
	I1028 12:38:02.846290    5536 system_pods.go:89] "kube-scheduler-multinode-071500" [c7b70910-55e3-4e8d-a167-f30516fc8241] Running
	I1028 12:38:02.846290    5536 system_pods.go:89] "storage-provisioner" [3041ff50-d6af-4c68-803f-78a36f22c000] Running
	I1028 12:38:02.846290    5536 system_pods.go:126] duration metric: took 202.2873ms to wait for k8s-apps to be running ...
	I1028 12:38:02.846290    5536 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:38:02.856840    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:38:02.887240    5536 system_svc.go:56] duration metric: took 40.8164ms WaitForService to wait for kubelet
	I1028 12:38:02.887240    5536 kubeadm.go:582] duration metric: took 26.4864384s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:38:02.887240    5536 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:38:03.037033    5536 request.go:632] Waited for 149.7908ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/nodes
	I1028 12:38:03.037033    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes
	I1028 12:38:03.037033    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:03.037033    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:03.037033    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:03.044866    5536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 12:38:03.044913    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:03.044913    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:03 GMT
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Audit-Id: 1719c4ae-81ee-476f-9764-d99cf06029f3
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:03.044913    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:03.045746    5536 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1028 12:38:03.046323    5536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:38:03.046398    5536 node_conditions.go:123] node cpu capacity is 2
	I1028 12:38:03.046398    5536 node_conditions.go:105] duration metric: took 159.156ms to run NodePressure ...
	I1028 12:38:03.046470    5536 start.go:241] waiting for startup goroutines ...
	I1028 12:38:03.046470    5536 start.go:246] waiting for cluster config update ...
	I1028 12:38:03.046470    5536 start.go:255] writing updated cluster config ...
	I1028 12:38:03.057918    5536 ssh_runner.go:195] Run: rm -f paused
	I1028 12:38:03.228333    5536 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:38:03.233952    5536 out.go:177] * Done! kubectl is now configured to use "multinode-071500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.324372202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.325471307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.416348959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.427572414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.427591214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.427715015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430045527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430105127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430151227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430264328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 cri-dockerd[1325]: time="2024-10-28T12:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc34e719069f7a9d34907f483d865421c54e40c613bc542e28e61534eddf3683/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 12:37:59 multinode-071500 cri-dockerd[1325]: time="2024-10-28T12:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6591463df0d1e459073bfa0e55c5eb78f4168eb6c4c84321aaf32c66f9f7a546/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 12:37:59 multinode-071500 cri-dockerd[1325]: time="2024-10-28T12:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4aa32a83149a0ff38b11c6d6629fb377d8be7307560fb270d10f9fd319ab26cf/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939179517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939436319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939464620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939580521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053257324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053484527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053522827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053662128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.090507298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.091069303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.091418407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.096615559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d9a4ff27464a9       c69fa2e9cbf5f                                                                              25 seconds ago       Running             coredns                   0                   4aa32a83149a0       coredns-7c65d6cfc9-w5gxr
	68b45017566f4       6e38f40d628db                                                                              25 seconds ago       Running             storage-provisioner       0                   6591463df0d1e       storage-provisioner
	f3f03e6599ba5       c69fa2e9cbf5f                                                                              25 seconds ago       Running             coredns                   0                   cc34e719069f7       coredns-7c65d6cfc9-j8vdn
	65e0fb44dec2a       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387   41 seconds ago       Running             kindnet-cni               0                   8092d681a7fcf       kindnet-c7z7c
	7869c78851adc       505d571f5fd56                                                                              48 seconds ago       Running             kube-proxy                0                   5bd6b76949e46       kube-proxy-tgw89
	194625c1d055f       9499c9960544e                                                                              About a minute ago   Running             kube-apiserver            0                   255a07694cdd3       kube-apiserver-multinode-071500
	dd6a29921aeb0       0486b6c53a1b5                                                                              About a minute ago   Running             kube-controller-manager   0                   e5a544d2ba02d       kube-controller-manager-multinode-071500
	e4b9f1d00646c       2e96e5913fc06                                                                              About a minute ago   Running             etcd                      0                   3741c4710b9e1       etcd-multinode-071500
	2f85a96248571       847c7bc1a5418                                                                              About a minute ago   Running             kube-scheduler            0                   5bc248af891ae       kube-scheduler-multinode-071500
	
	
	==> coredns [d9a4ff27464a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f3f03e6599ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               multinode-071500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-071500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=multinode-071500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_37_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:37:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-071500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.244.98
	  Hostname:    multinode-071500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 185d9c6ed47f4a5096378807b9fa20dc
	  System UUID:                01909705-6ec2-2e4c-a584-38b558b009f0
	  Boot ID:                    68ba9dca-d12b-4823-946f-4b1508951028
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-j8vdn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 coredns-7c65d6cfc9-w5gxr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 etcd-multinode-071500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         54s
	  kube-system                 kindnet-c7z7c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      49s
	  kube-system                 kube-apiserver-multinode-071500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-multinode-071500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-tgw89                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-multinode-071500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node multinode-071500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node multinode-071500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 62s)  kubelet          Node multinode-071500 status is now: NodeHasSufficientPID
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  54s                kubelet          Node multinode-071500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s                kubelet          Node multinode-071500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s                kubelet          Node multinode-071500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node multinode-071500 event: Registered Node multinode-071500 in Controller
	  Normal  NodeReady                26s                kubelet          Node multinode-071500 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.909337] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 12:36] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.218831] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[ +27.387805] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.118489] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.568429] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	[  +0.231942] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.243355] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +2.893672] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.207419] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.208207] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.281567] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[Oct28 12:37] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.115357] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.159710] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +7.389431] systemd-fstab-generator[1830]: Ignoring "noauto" option for root device
	[  +0.127450] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.575684] systemd-fstab-generator[2240]: Ignoring "noauto" option for root device
	[  +0.168990] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.191449] systemd-fstab-generator[2346]: Ignoring "noauto" option for root device
	[  +0.118019] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.381667] hrtimer: interrupt took 2384119 ns
	[  +0.687538] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [e4b9f1d00646] <==
	{"level":"info","ts":"2024-10-28T12:37:24.491535Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9ff7ddd6e6d528cf","initial-advertise-peer-urls":["https://172.27.244.98:2380"],"listen-peer-urls":["https://172.27.244.98:2380"],"advertise-client-urls":["https://172.27.244.98:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.244.98:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:37:24.491608Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:37:25.037915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T12:37:25.038166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T12:37:25.038389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf received MsgPreVoteResp from 9ff7ddd6e6d528cf at term 1"}
	{"level":"info","ts":"2024-10-28T12:37:25.038589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.040847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf received MsgVoteResp from 9ff7ddd6e6d528cf at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.041057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf became leader at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.041267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ff7ddd6e6d528cf elected leader 9ff7ddd6e6d528cf at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.048049Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.055113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9ff7ddd6e6d528cf","local-member-attributes":"{Name:multinode-071500 ClientURLs:[https://172.27.244.98:2379]}","request-path":"/0/members/9ff7ddd6e6d528cf/attributes","cluster-id":"9b78066306349c95","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:37:25.055172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:37:25.055713Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:37:25.058926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:37:25.059121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:37:25.059355Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9b78066306349c95","local-member-id":"9ff7ddd6e6d528cf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.059793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.060079Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.061757Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:37:25.066036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.244.98:2379"}
	{"level":"info","ts":"2024-10-28T12:37:25.066377Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:37:25.067646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:37:39.319261Z","caller":"traceutil/trace.go:171","msg":"trace[2019660784] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"110.373943ms","start":"2024-10-28T12:37:39.208864Z","end":"2024-10-28T12:37:39.319238Z","steps":["trace[2019660784] 'process raft request'  (duration: 109.765438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:37:44.149664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.822975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-071500\" ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2024-10-28T12:37:44.149771Z","caller":"traceutil/trace.go:171","msg":"trace[1360376525] range","detail":"{range_begin:/registry/minions/multinode-071500; range_end:; response_count:1; response_revision:392; }","duration":"214.010677ms","start":"2024-10-28T12:37:43.935737Z","end":"2024-10-28T12:37:44.149747Z","steps":["trace[1360376525] 'range keys from in-memory index tree'  (duration: 213.725475ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:38:24 up 3 min,  0 users,  load average: 0.66, 0.35, 0.13
	Linux multinode-071500 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65e0fb44dec2] <==
	I1028 12:37:45.150634       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1028 12:37:45.151083       1 main.go:139] hostIP = 172.27.244.98
	podIP = 172.27.244.98
	I1028 12:37:45.151435       1 main.go:148] setting mtu 1500 for CNI 
	I1028 12:37:45.151636       1 main.go:178] kindnetd IP family: "ipv4"
	I1028 12:37:45.151781       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1028 12:37:46.148296       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I1028 12:37:56.157729       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:37:56.158083       1 main.go:300] handling current node
	I1028 12:38:06.150178       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:06.150240       1 main.go:300] handling current node
	I1028 12:38:16.160005       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:16.160193       1 main.go:300] handling current node
	
	
	==> kube-apiserver [194625c1d055] <==
	I1028 12:37:27.409594       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 12:37:27.409958       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 12:37:27.409999       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1028 12:37:27.416609       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1028 12:37:27.416672       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1028 12:37:27.417145       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 12:37:27.418605       1 policy_source.go:224] refreshing policies
	I1028 12:37:27.456538       1 controller.go:615] quota admission added evaluator for: namespaces
	E1028 12:37:27.518964       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1028 12:37:27.626496       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 12:37:28.218679       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1028 12:37:28.228344       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1028 12:37:28.228380       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 12:37:29.470521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 12:37:29.570087       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 12:37:29.744451       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1028 12:37:29.786026       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.244.98]
	I1028 12:37:29.787419       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 12:37:29.814014       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 12:37:30.298577       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 12:37:30.552293       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 12:37:30.605996       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 12:37:30.658631       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 12:37:35.697225       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 12:37:35.998791       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [dd6a29921aeb] <==
	I1028 12:37:35.248080       1 shared_informer.go:320] Caches are synced for disruption
	I1028 12:37:35.251696       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:37:35.261782       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:37:35.299486       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 12:37:35.726720       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:37:35.746311       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:37:35.746354       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 12:37:35.949740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:37:36.330028       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="319.843045ms"
	I1028 12:37:36.354703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="24.458863ms"
	I1028 12:37:36.355505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.401µs"
	I1028 12:37:58.618694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:37:58.635443       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:37:58.658482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.451007ms"
	I1028 12:37:58.662895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.3µs"
	I1028 12:37:58.694729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="314.501µs"
	I1028 12:37:58.733963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="180.401µs"
	I1028 12:38:00.049108       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1028 12:38:00.331093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.2µs"
	I1028 12:38:00.394237       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.401µs"
	I1028 12:38:01.394724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.600616ms"
	I1028 12:38:01.395429       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="115.901µs"
	I1028 12:38:01.403025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:38:01.490155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.97842ms"
	I1028 12:38:01.490306       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.7µs"
	
	
	==> kube-proxy [7869c78851ad] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:37:37.400456       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:37:37.425871       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.27.244.98"]
	E1028 12:37:37.426380       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:37:37.529412       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:37:37.529546       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:37:37.529582       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:37:37.534266       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:37:37.535064       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:37:37.535866       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:37:37.538339       1 config.go:199] "Starting service config controller"
	I1028 12:37:37.538550       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:37:37.538918       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:37:37.539135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:37:37.542601       1 config.go:328] "Starting node config controller"
	I1028 12:37:37.542941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:37:37.559096       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:37:37.639490       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:37:37.639595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2f85a9624857] <==
	W1028 12:37:28.496466       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:37:28.498488       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 12:37:28.522186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:37:28.522543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.552203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:37:28.552241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.580607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:37:28.580729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.680181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:37:28.682510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.711501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 12:37:28.711878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.712889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 12:37:28.713975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.755348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:37:28.755415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.821039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 12:37:28.821478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.906662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:37:28.907605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.932186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:37:28.932240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:29.010168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:37:29.010492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1028 12:37:31.101434       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:37:31 multinode-071500 kubelet[2247]: I1028 12:37:31.865016    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-071500" podStartSLOduration=1.864994627 podStartE2EDuration="1.864994627s" podCreationTimestamp="2024-10-28 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:37:31.831768474 +0000 UTC m=+1.391579754" watchObservedRunningTime="2024-10-28 12:37:31.864994627 +0000 UTC m=+1.424805807"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.148768    2247 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.151399    2247 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.791558    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe651213-d8ad-43ae-b151-dd8ad6cd1e8d-xtables-lock\") pod \"kube-proxy-tgw89\" (UID: \"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d\") " pod="kube-system/kube-proxy-tgw89"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.791798    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe651213-d8ad-43ae-b151-dd8ad6cd1e8d-kube-proxy\") pod \"kube-proxy-tgw89\" (UID: \"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d\") " pod="kube-system/kube-proxy-tgw89"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.791926    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9151b032-96d2-40e4-b4e6-6bac4ccb5180-lib-modules\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792006    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe651213-d8ad-43ae-b151-dd8ad6cd1e8d-lib-modules\") pod \"kube-proxy-tgw89\" (UID: \"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d\") " pod="kube-system/kube-proxy-tgw89"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792106    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9151b032-96d2-40e4-b4e6-6bac4ccb5180-cni-cfg\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792197    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9151b032-96d2-40e4-b4e6-6bac4ccb5180-xtables-lock\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792291    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnsz\" (UniqueName: \"kubernetes.io/projected/9151b032-96d2-40e4-b4e6-6bac4ccb5180-kube-api-access-cdnsz\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792396    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2694\" (UniqueName: \"kubernetes.io/projected/fe651213-d8ad-43ae-b151-dd8ad6cd1e8d-kube-api-access-s2694\") pod \"kube-proxy-tgw89\" (UID: \"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d\") " pod="kube-system/kube-proxy-tgw89"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.938541    2247 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 28 12:37:37 multinode-071500 kubelet[2247]: I1028 12:37:37.394913    2247 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8092d681a7fcf95fd4abec34e0c8aae511804c8a70366790b3a66de9aba99cd7"
	Oct 28 12:37:37 multinode-071500 kubelet[2247]: I1028 12:37:37.505463    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tgw89" podStartSLOduration=2.505429779 podStartE2EDuration="2.505429779s" podCreationTimestamp="2024-10-28 12:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:37:37.460960121 +0000 UTC m=+7.020771401" watchObservedRunningTime="2024-10-28 12:37:37.505429779 +0000 UTC m=+7.065241059"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.596806    2247 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.652971    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c7z7c" podStartSLOduration=17.178162282 podStartE2EDuration="23.652951227s" podCreationTimestamp="2024-10-28 12:37:35 +0000 UTC" firstStartedPulling="2024-10-28 12:37:37.405135046 +0000 UTC m=+6.964946226" lastFinishedPulling="2024-10-28 12:37:43.879923991 +0000 UTC m=+13.439735171" observedRunningTime="2024-10-28 12:37:45.851499758 +0000 UTC m=+15.411311038" watchObservedRunningTime="2024-10-28 12:37:58.652951227 +0000 UTC m=+28.212762407"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796463    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852-config-volume\") pod \"coredns-7c65d6cfc9-w5gxr\" (UID: \"2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852\") " pod="kube-system/coredns-7c65d6cfc9-w5gxr"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796621    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72f8f3d0-e08c-44f1-8f74-6f5685c5bf75-config-volume\") pod \"coredns-7c65d6cfc9-j8vdn\" (UID: \"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75\") " pod="kube-system/coredns-7c65d6cfc9-j8vdn"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796652    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmhf\" (UniqueName: \"kubernetes.io/projected/72f8f3d0-e08c-44f1-8f74-6f5685c5bf75-kube-api-access-svmhf\") pod \"coredns-7c65d6cfc9-j8vdn\" (UID: \"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75\") " pod="kube-system/coredns-7c65d6cfc9-j8vdn"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796689    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7sk5\" (UniqueName: \"kubernetes.io/projected/2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852-kube-api-access-j7sk5\") pod \"coredns-7c65d6cfc9-w5gxr\" (UID: \"2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852\") " pod="kube-system/coredns-7c65d6cfc9-w5gxr"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796720    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3041ff50-d6af-4c68-803f-78a36f22c000-tmp\") pod \"storage-provisioner\" (UID: \"3041ff50-d6af-4c68-803f-78a36f22c000\") " pod="kube-system/storage-provisioner"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796744    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrsx\" (UniqueName: \"kubernetes.io/projected/3041ff50-d6af-4c68-803f-78a36f22c000-kube-api-access-hqrsx\") pod \"storage-provisioner\" (UID: \"3041ff50-d6af-4c68-803f-78a36f22c000\") " pod="kube-system/storage-provisioner"
	Oct 28 12:38:00 multinode-071500 kubelet[2247]: I1028 12:38:00.391928    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j8vdn" podStartSLOduration=24.391903218 podStartE2EDuration="24.391903218s" podCreationTimestamp="2024-10-28 12:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:38:00.390000399 +0000 UTC m=+29.949811679" watchObservedRunningTime="2024-10-28 12:38:00.391903218 +0000 UTC m=+29.951714498"
	Oct 28 12:38:00 multinode-071500 kubelet[2247]: I1028 12:38:00.392210    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w5gxr" podStartSLOduration=24.392199321 podStartE2EDuration="24.392199321s" podCreationTimestamp="2024-10-28 12:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:38:00.338172679 +0000 UTC m=+29.897983959" watchObservedRunningTime="2024-10-28 12:38:00.392199321 +0000 UTC m=+29.952010601"
	Oct 28 12:38:01 multinode-071500 kubelet[2247]: I1028 12:38:01.446619    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.44659458 podStartE2EDuration="17.44659458s" podCreationTimestamp="2024-10-28 12:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:38:01.415678471 +0000 UTC m=+30.975489651" watchObservedRunningTime="2024-10-28 12:38:01.44659458 +0000 UTC m=+31.006405760"
	
	
	==> storage-provisioner [68b45017566f] <==
	I1028 12:38:00.241425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:38:00.376991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:38:00.377366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:38:00.399349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:38:00.399519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-071500_88ce8c2c-61c9-4092-b5bf-02d51fbf660c!
	I1028 12:38:00.401349       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ccdd4c9-5c6b-4ada-bb8b-34eeb18cb932", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-071500_88ce8c2c-61c9-4092-b5bf-02d51fbf660c became leader
	I1028 12:38:00.500704       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-071500_88ce8c2c-61c9-4092-b5bf-02d51fbf660c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-071500 -n multinode-071500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-071500 -n multinode-071500: (12.9435563s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-071500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (262.31s)

                                                
                                    
TestMultiNode/serial/DeleteNode (56.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 node delete m03: exit status 80 (7.7935989s)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster multinode-071500
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve node: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_0750fd9891480dd9ca1e47b9cdea735b19eeab48_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-071500 node delete m03": exit status 80
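
The GUEST_NODE_DELETE failure above reports that no node named m03 exists in the cluster's saved config, so the delete fails during node lookup with exit status 80. A standalone Go sketch of a pre-flight check for that situation is shown below; it is an illustration rather than minikube's own logic, it assumes "minikube node list" prints one node per line with the name in the first field, and the binary path and profile name are copied from this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// nodeExists reports whether a node name shows up in "minikube node list -p <profile>".
// Output format assumption: one node per line, name in the first whitespace-separated field.
func nodeExists(minikube, profile, node string) (bool, error) {
	out, err := exec.Command(minikube, "node", "list", "-p", profile).Output()
	if err != nil {
		return false, fmt.Errorf("node list: %w", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[0] == node {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Binary path and profile name copied from this test run.
	const minikube = "out/minikube-windows-amd64.exe"
	const profile = "multinode-071500"
	ok, err := nodeExists(minikube, profile, "m03")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("m03 is not in the profile; node delete would fail with GUEST_NODE_DELETE")
		return
	}
	fmt.Println("m03 found; node delete should be able to proceed")
}
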
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr: (12.7664999s)
multinode_test.go:428: status says both hosts are not running: args "out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr": multinode-071500
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode_test.go:432: status says both kubelets are not running: args "out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr": multinode-071500
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:449: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

-- /stdout --
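
The readiness check above renders the status of each node's Ready condition, and the assertion at multinode_test.go:449 expects two True values; only one came back, matching the single running host reported by status. A standalone Go sketch that performs the same count is included below; kubectl on PATH is assumed, the context name is taken from this report, and the template is a lightly reformatted copy of the one used by the test.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Lightly reformatted copy of the Ready-condition template used by the test:
	// it prints the status of each node's Ready condition.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} {{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "--context", "multinode-071500",
		"get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl failed: %v\n", err)
		os.Exit(1)
	}
	ready := 0
	for _, s := range strings.Fields(string(out)) {
		if s == "True" {
			ready++
		}
	}
	fmt.Printf("nodes reporting Ready=True: %d (the assertion expects 2)\n", ready)
}
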
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: (12.8649269s)
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-071500 logs -n 25: (8.9713596s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-071500 -- rollout       | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:29 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- exec          | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | -- nslookup kubernetes.io            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- exec          | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | -- nslookup kubernetes.default       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500                  | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | -- exec  -- nslookup                 |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-071500 -- get pods -o   | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:30 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| node    | add -p multinode-071500 -v 3         | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:31 UTC |                     |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-071500 node stop m03       | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC |                     |
	| node    | multinode-071500 node start          | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC |                     |
	|         | m03 -v=7 --alsologtostderr           |                  |                   |         |                     |                     |
	| node    | list -p multinode-071500             | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:34 UTC |                     |
	| stop    | -p multinode-071500                  | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:34 UTC | 28 Oct 24 12:34 UTC |
	| start   | -p multinode-071500                  | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:34 UTC | 28 Oct 24 12:38 UTC |
	|         | --wait=true -v=8                     |                  |                   |         |                     |                     |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-071500             | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	| node    | multinode-071500 node delete         | multinode-071500 | minikube6\jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | m03                                  |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:34:56
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:34:56.666427    5536 out.go:345] Setting OutFile to fd 1984 ...
	I1028 12:34:56.751121    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:34:56.751121    5536 out.go:358] Setting ErrFile to fd 1492...
	I1028 12:34:56.751121    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:34:56.775384    5536 out.go:352] Setting JSON to false
	I1028 12:34:56.779507    5536 start.go:129] hostinfo: {"hostname":"minikube6","uptime":166721,"bootTime":1729952174,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 12:34:56.779507    5536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 12:34:56.784433    5536 out.go:177] * [multinode-071500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 12:34:56.787119    5536 notify.go:220] Checking for updates...
	I1028 12:34:56.789366    5536 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:34:56.791420    5536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:34:56.794380    5536 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 12:34:56.797545    5536 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:34:56.800105    5536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:34:56.803865    5536 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:34:56.804885    5536 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:35:02.400781    5536 out.go:177] * Using the hyperv driver based on existing profile
	I1028 12:35:02.404951    5536 start.go:297] selected driver: hyperv
	I1028 12:35:02.404951    5536 start.go:901] validating driver "hyperv" against &{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.249.25 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:35:02.405123    5536 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:35:02.454829    5536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:35:02.454829    5536 cni.go:84] Creating CNI manager for ""
	I1028 12:35:02.454829    5536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:35:02.454829    5536 start.go:340] cluster config:
	{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.249.25 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:35:02.455735    5536 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:35:02.463217    5536 out.go:177] * Starting "multinode-071500" primary control-plane node in "multinode-071500" cluster
	I1028 12:35:02.465576    5536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:35:02.466508    5536 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 12:35:02.466508    5536 cache.go:56] Caching tarball of preloaded images
	I1028 12:35:02.466508    5536 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 12:35:02.466508    5536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 12:35:02.466508    5536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:35:02.470011    5536 start.go:360] acquireMachinesLock for multinode-071500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:35:02.470011    5536 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-071500"
	I1028 12:35:02.470011    5536 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:35:02.470011    5536 fix.go:54] fixHost starting: 
	I1028 12:35:02.471012    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:05.135149    5536 main.go:141] libmachine: [stdout =====>] : Off
	
	I1028 12:35:05.135149    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:05.135149    5536 fix.go:112] recreateIfNeeded on multinode-071500: state=Stopped err=<nil>
	W1028 12:35:05.135149    5536 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:35:05.138703    5536 out.go:177] * Restarting existing hyperv VM for "multinode-071500" ...
	I1028 12:35:05.142947    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-071500
	I1028 12:35:08.237568    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:08.237568    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:08.237568    5536 main.go:141] libmachine: Waiting for host to start...
	I1028 12:35:08.237568    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:10.525404    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:10.525404    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:10.525404    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:13.053429    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:13.053845    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:14.054891    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:16.320644    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:16.320772    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:16.320772    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:18.910405    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:18.910405    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:19.910671    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:22.175501    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:22.175501    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:22.175501    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:24.757037    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:24.757037    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:25.757313    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:28.049865    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:28.049865    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:28.049865    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:30.652316    5536 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:35:30.652316    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:31.652582    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:33.901681    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:33.901681    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:33.901681    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:36.580198    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:36.580198    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:36.584163    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:38.786994    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:38.788222    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:38.788222    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:41.393216    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:41.393216    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:41.393579    5536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:35:41.397564    5536 machine.go:93] provisionDockerMachine start ...
	I1028 12:35:41.397686    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:43.548993    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:43.548993    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:43.548993    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:46.112252    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:46.113009    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:46.119302    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:46.119550    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:35:46.120145    5536 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:35:46.250618    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:35:46.250618    5536 buildroot.go:166] provisioning hostname "multinode-071500"
	I1028 12:35:46.250618    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:48.438852    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:48.438930    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:48.438930    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:51.045692    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:51.045692    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:51.054209    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:51.054949    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:35:51.054949    5536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-071500 && echo "multinode-071500" | sudo tee /etc/hostname
	I1028 12:35:51.208291    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-071500
	
	I1028 12:35:51.208291    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:53.435254    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:53.436145    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:53.436339    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:35:56.122595    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:35:56.122595    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:56.128806    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:56.129413    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:35:56.129413    5536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-071500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-071500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-071500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:35:56.268107    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:35:56.268257    5536 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 12:35:56.268335    5536 buildroot.go:174] setting up certificates
	I1028 12:35:56.268335    5536 provision.go:84] configureAuth start
	I1028 12:35:56.268456    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:35:58.486808    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:35:58.486808    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:35:58.486962    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:01.098358    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:01.098358    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:01.098358    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:03.277569    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:03.277569    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:03.277569    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:05.931981    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:05.932036    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:05.932036    5536 provision.go:143] copyHostCerts
	I1028 12:36:05.932036    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 12:36:05.932570    5536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 12:36:05.932570    5536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 12:36:05.932830    5536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 12:36:05.934422    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 12:36:05.934422    5536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 12:36:05.934422    5536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 12:36:05.935237    5536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 12:36:05.936018    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 12:36:05.936554    5536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 12:36:05.936658    5536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 12:36:05.936871    5536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 12:36:05.938073    5536 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-071500 san=[127.0.0.1 172.27.244.98 localhost minikube multinode-071500]
	I1028 12:36:06.130120    5536 provision.go:177] copyRemoteCerts
	I1028 12:36:06.141421    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:36:06.141421    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:08.361691    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:08.361878    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:08.361878    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:10.952224    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:10.952642    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:10.953177    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:11.056927    5536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.915451s)
	I1028 12:36:11.056927    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 12:36:11.057231    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:36:11.110456    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 12:36:11.110720    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 12:36:11.157782    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 12:36:11.158395    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:36:11.204886    5536 provision.go:87] duration metric: took 14.9363823s to configureAuth
	I1028 12:36:11.204886    5536 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:36:11.205924    5536 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:36:11.205924    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:13.339276    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:13.339790    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:13.339790    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:15.917727    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:15.917727    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:15.924302    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:15.924302    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:15.924302    5536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 12:36:16.055455    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 12:36:16.055597    5536 buildroot.go:70] root file system type: tmpfs
	I1028 12:36:16.055844    5536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 12:36:16.055928    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:18.243905    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:18.243905    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:18.244267    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:20.827810    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:20.827873    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:20.833215    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:20.833353    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:20.833876    5536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 12:36:20.999027    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 12:36:20.999027    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:23.132072    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:23.132072    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:23.132280    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:25.703137    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:25.703390    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:25.708678    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:25.709204    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:25.709204    5536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 12:36:27.930469    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 12:36:27.930469    5536 machine.go:96] duration metric: took 46.5323796s to provisionDockerMachine
	I1028 12:36:27.930469    5536 start.go:293] postStartSetup for "multinode-071500" (driver="hyperv")
	I1028 12:36:27.931048    5536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:36:27.943014    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:36:27.943592    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:30.130085    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:30.130849    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:30.130988    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:32.778603    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:32.778603    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:32.779633    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:32.889212    5536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9455649s)
	I1028 12:36:32.900128    5536 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:36:32.908145    5536 command_runner.go:130] > NAME=Buildroot
	I1028 12:36:32.908145    5536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 12:36:32.908145    5536 command_runner.go:130] > ID=buildroot
	I1028 12:36:32.908145    5536 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 12:36:32.908145    5536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 12:36:32.908145    5536 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:36:32.908145    5536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 12:36:32.908880    5536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 12:36:32.909635    5536 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 12:36:32.909635    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 12:36:32.922670    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:36:32.940830    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 12:36:32.988594    5536 start.go:296] duration metric: took 5.058067s for postStartSetup
	I1028 12:36:32.988876    5536 fix.go:56] duration metric: took 1m30.5176985s for fixHost
	I1028 12:36:32.988928    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:35.200634    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:35.201363    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:35.201437    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:37.822660    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:37.822660    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:37.828530    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:37.829340    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:37.829340    5536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:36:37.958514    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730118997.972644227
	
	I1028 12:36:37.958514    5536 fix.go:216] guest clock: 1730118997.972644227
	I1028 12:36:37.958514    5536 fix.go:229] Guest: 2024-10-28 12:36:37.972644227 +0000 UTC Remote: 2024-10-28 12:36:32.9888762 +0000 UTC m=+96.419455301 (delta=4.983768027s)
	I1028 12:36:37.959137    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:40.183180    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:40.183180    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:40.183180    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:42.794436    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:42.794560    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:42.800818    5536 main.go:141] libmachine: Using SSH client type: native
	I1028 12:36:42.800818    5536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.244.98 22 <nil> <nil>}
	I1028 12:36:42.801399    5536 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730118997
	I1028 12:36:42.943713    5536 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 12:36:37 UTC 2024
	
	I1028 12:36:42.943713    5536 fix.go:236] clock set: Mon Oct 28 12:36:37 UTC 2024
	 (err=<nil>)
	I1028 12:36:42.943713    5536 start.go:83] releasing machines lock for "multinode-071500", held for 1m40.472567s
	I1028 12:36:42.943713    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:45.173941    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:45.173941    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:45.174463    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:47.783560    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:47.784148    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:47.788033    5536 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 12:36:47.788576    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:47.802257    5536 ssh_runner.go:195] Run: cat /version.json
	I1028 12:36:47.802257    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:36:50.088629    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:50.089218    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:50.089218    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:50.089218    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:36:50.089774    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:50.089774    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:36:52.804920    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:52.805632    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:52.805632    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:52.832441    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:36:52.832441    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:36:52.833221    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:36:52.903763    5536 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 12:36:52.904316    5536 ssh_runner.go:235] Completed: cat /version.json: (5.1020018s)
	I1028 12:36:52.916521    5536 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1028 12:36:52.916957    5536 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1288657s)
	W1028 12:36:52.916957    5536 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 12:36:52.917231    5536 ssh_runner.go:195] Run: systemctl --version
	I1028 12:36:52.926470    5536 command_runner.go:130] > systemd 252 (252)
	I1028 12:36:52.926470    5536 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 12:36:52.938643    5536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:36:52.947642    5536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 12:36:52.948416    5536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:36:52.959310    5536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:36:52.989431    5536 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1028 12:36:52.989710    5536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:36:52.989933    5536 start.go:495] detecting cgroup driver to use...
	I1028 12:36:52.990513    5536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:36:53.026237    5536 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W1028 12:36:53.030289    5536 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 12:36:53.030471    5536 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 12:36:53.038302    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 12:36:53.075502    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 12:36:53.098022    5536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 12:36:53.109780    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 12:36:53.141660    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:36:53.173563    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 12:36:53.205980    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:36:53.240068    5536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:36:53.274173    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 12:36:53.306396    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 12:36:53.341068    5536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
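The run of sed commands between 12:36:53.038 and 12:36:53.341 rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.10, force SystemdCgroup = false (the cgroupfs driver chosen above), migrate io.containerd.runtime.v1.linux / runc.v1 references to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A rough Go equivalent of the two cgroup/pause rewrites, editing the file contents instead of shelling out to sed (a sketch, not minikube's implementation):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }

        // sandbox_image = "..."      -> registry.k8s.io/pause:3.10
        // SystemdCgroup = true/false -> false (containerd driven with cgroupfs here)
        sandbox := regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

        out := sandbox.ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10"`))
        out = cgroup.ReplaceAll(out, []byte(`${1}SystemdCgroup = false`))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }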
	I1028 12:36:53.373539    5536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:36:53.397892    5536 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:36:53.398542    5536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:36:53.410247    5536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:36:53.444742    5536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:36:53.473148    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:53.670923    5536 ssh_runner.go:195] Run: sudo systemctl restart containerd
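The sysctl failure above is expected when the br_netfilter module has not been loaded yet, so the module is loaded and IPv4 forwarding enabled before containerd is restarted. Both steps are tiny; in Go they amount to the following (a sketch, must run as root inside the guest):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Equivalent of "sudo modprobe br_netfilter".
        if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
            log.Fatal(err)
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            log.Fatal(err)
        }
    }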
	I1028 12:36:53.703561    5536 start.go:495] detecting cgroup driver to use...
	I1028 12:36:53.716908    5536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 12:36:53.747325    5536 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1028 12:36:53.747325    5536 command_runner.go:130] > [Unit]
	I1028 12:36:53.747504    5536 command_runner.go:130] > Description=Docker Application Container Engine
	I1028 12:36:53.747548    5536 command_runner.go:130] > Documentation=https://docs.docker.com
	I1028 12:36:53.747583    5536 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1028 12:36:53.747583    5536 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1028 12:36:53.747615    5536 command_runner.go:130] > StartLimitBurst=3
	I1028 12:36:53.747615    5536 command_runner.go:130] > StartLimitIntervalSec=60
	I1028 12:36:53.747615    5536 command_runner.go:130] > [Service]
	I1028 12:36:53.747615    5536 command_runner.go:130] > Type=notify
	I1028 12:36:53.747665    5536 command_runner.go:130] > Restart=on-failure
	I1028 12:36:53.747665    5536 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1028 12:36:53.747665    5536 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1028 12:36:53.747665    5536 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1028 12:36:53.747665    5536 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1028 12:36:53.747665    5536 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1028 12:36:53.747665    5536 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1028 12:36:53.747665    5536 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1028 12:36:53.747665    5536 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1028 12:36:53.747665    5536 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1028 12:36:53.747665    5536 command_runner.go:130] > ExecStart=
	I1028 12:36:53.747665    5536 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1028 12:36:53.747665    5536 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1028 12:36:53.747665    5536 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1028 12:36:53.747665    5536 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1028 12:36:53.747665    5536 command_runner.go:130] > LimitNOFILE=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > LimitNPROC=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > LimitCORE=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1028 12:36:53.747665    5536 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1028 12:36:53.747665    5536 command_runner.go:130] > TasksMax=infinity
	I1028 12:36:53.747665    5536 command_runner.go:130] > TimeoutStartSec=0
	I1028 12:36:53.747665    5536 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1028 12:36:53.747665    5536 command_runner.go:130] > Delegate=yes
	I1028 12:36:53.747665    5536 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1028 12:36:53.747665    5536 command_runner.go:130] > KillMode=process
	I1028 12:36:53.747665    5536 command_runner.go:130] > [Install]
	I1028 12:36:53.747665    5536 command_runner.go:130] > WantedBy=multi-user.target
	I1028 12:36:53.760837    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:36:53.800047    5536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:36:53.844682    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:36:53.880744    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:36:53.915134    5536 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 12:36:53.994892    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
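With Docker selected as the container runtime, the pattern above shuts down the other runtimes: `systemctl is-active --quiet ...` serves as a cheap probe (exit status 0 means active), `systemctl stop -f ...` is issued, and the probe is repeated to confirm containerd and crio are down. A small Go sketch of that probe-then-stop loop (assumes root on the guest; not minikube's own code):

    package main

    import (
        "log"
        "os/exec"
    )

    // isActive mirrors `systemctl is-active --quiet <unit>`: exit status 0 means active.
    func isActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        for _, unit := range []string{"containerd", "crio"} {
            if isActive(unit) {
                if err := exec.Command("systemctl", "stop", "-f", unit).Run(); err != nil {
                    log.Printf("stopping %s: %v", unit, err)
                }
            }
        }
    }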
	I1028 12:36:54.019684    5536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:36:54.056764    5536 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1028 12:36:54.067116    5536 ssh_runner.go:195] Run: which cri-dockerd
	I1028 12:36:54.073667    5536 command_runner.go:130] > /usr/bin/cri-dockerd
	I1028 12:36:54.084505    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 12:36:54.104625    5536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 12:36:54.149881    5536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 12:36:54.365991    5536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 12:36:54.567365    5536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 12:36:54.567651    5536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
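docker.go:574 then writes a 130-byte /etc/docker/daemon.json so that dockerd itself runs with the cgroupfs driver, matching what was just configured on the containerd side. The file's contents are not echoed in the log; the sketch below shows one plausible daemon.json writer in Go, and every key besides the cgroup-driver exec-opt is illustrative rather than something this log confirms:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        // Hypothetical daemon.json enforcing the cgroupfs driver; the real file
        // minikube writes here is 130 bytes and is not printed in the log above.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "storage-driver": "overlay2",
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }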
	I1028 12:36:54.611059    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:54.841683    5536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:36:57.431947    5536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5887535s)
	I1028 12:36:57.445563    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 12:36:57.485901    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 12:36:57.519746    5536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 12:36:57.725369    5536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 12:36:57.921411    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:58.128297    5536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 12:36:58.168705    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 12:36:58.206264    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:36:58.416540    5536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 12:36:58.531602    5536 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 12:36:58.543182    5536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 12:36:58.551838    5536 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1028 12:36:58.551838    5536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 12:36:58.551838    5536 command_runner.go:130] > Device: 0,22	Inode: 857         Links: 1
	I1028 12:36:58.551925    5536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1028 12:36:58.551925    5536 command_runner.go:130] > Access: 2024-10-28 12:36:58.456643614 +0000
	I1028 12:36:58.551925    5536 command_runner.go:130] > Modify: 2024-10-28 12:36:58.456643614 +0000
	I1028 12:36:58.551925    5536 command_runner.go:130] > Change: 2024-10-28 12:36:58.461643630 +0000
	I1028 12:36:58.551983    5536 command_runner.go:130] >  Birth: -
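start.go:542 gives /var/run/cri-dockerd.sock a 60-second budget and then checks it with stat; here the socket already exists (created a moment earlier when cri-docker.service was restarted), so the wait collapses to a single successful stat. A polling version of that wait, sketched in Go:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a socket path until it appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("%s did not appear within %s", path, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
    }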
	I1028 12:36:58.551983    5536 start.go:563] Will wait 60s for crictl version
	I1028 12:36:58.563774    5536 ssh_runner.go:195] Run: which crictl
	I1028 12:36:58.570588    5536 command_runner.go:130] > /usr/bin/crictl
	I1028 12:36:58.581313    5536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:36:58.636732    5536 command_runner.go:130] > Version:  0.1.0
	I1028 12:36:58.636732    5536 command_runner.go:130] > RuntimeName:  docker
	I1028 12:36:58.636732    5536 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1028 12:36:58.636851    5536 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 12:36:58.636851    5536 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 12:36:58.646966    5536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:36:58.681716    5536 command_runner.go:130] > 27.3.1
	I1028 12:36:58.694333    5536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:36:58.734395    5536 command_runner.go:130] > 27.3.1
	I1028 12:36:58.740974    5536 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 12:36:58.741085    5536 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1028 12:36:58.744987    5536 ip.go:211] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:23:7f:d2 Flags:up|broadcast|multicast|running}
	I1028 12:36:58.747683    5536 ip.go:214] interface addr: fe80::866e:dfb9:193a:741e/64
	I1028 12:36:58.748635    5536 ip.go:214] interface addr: 172.27.240.1/20
	I1028 12:36:58.759086    5536 ssh_runner.go:195] Run: grep 172.27.240.1	host.minikube.internal$ /etc/hosts
	I1028 12:36:58.766328    5536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:36:58.790382    5536 kubeadm.go:883] updating cluster {Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:36:58.790506    5536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:36:58.800555    5536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:36:58.825491    5536 docker.go:689] Got preloaded images: 
	I1028 12:36:58.825491    5536 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.2 wasn't preloaded
	I1028 12:36:58.837616    5536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 12:36:58.856972    5536 command_runner.go:139] > {"Repositories":{}}
	I1028 12:36:58.867260    5536 ssh_runner.go:195] Run: which lz4
	I1028 12:36:58.873529    5536 command_runner.go:130] > /usr/bin/lz4
	I1028 12:36:58.873529    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 12:36:58.883881    5536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:36:58.891179    5536 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:36:58.891179    5536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:36:58.891386    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (343199686 bytes)
	I1028 12:37:01.194326    5536 docker.go:653] duration metric: took 2.3207702s to copy over tarball
	I1028 12:37:01.204238    5536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:37:09.489265    5536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.2849333s)
	I1028 12:37:09.489411    5536 ssh_runner.go:146] rm: /preloaded.tar.lz4
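The preload sequence is: stat /preloaded.tar.lz4 on the guest (missing, hence the statx error), scp the ~343 MB preloaded-images tarball across, unpack it into /var with lz4 (the 8.28 s step above), and delete the tarball. The extraction is a single tar invocation; a Go sketch of running it locally on the guest (assumes tar, lz4 and root privileges):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Mirrors the command in the log:
        //   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
        // The tarball is removed afterwards to reclaim the ~343 MB of disk.
        if err := os.Remove("/preloaded.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }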
	I1028 12:37:09.553645    5536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 12:37:09.572370    5536 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.15-0":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a":"sha256:2e96e5913fc06e3d26915af3d0f
2ca5048cc4b6327e661e80da792cbf8d8d9d4"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.31.2":"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0":"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.31.2":"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752":"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.31.2":"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe":"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd
10d47de7a0c2d38"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.31.2":"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282":"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I1028 12:37:09.572370    5536 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1028 12:37:10.755487    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:37:10.971723    5536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:37:13.713320    5536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7415658s)
	I1028 12:37:13.723662    5536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1028 12:37:13.749233    5536 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1028 12:37:13.749233    5536 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:37:13.749446    5536 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 12:37:13.749446    5536 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:37:13.749540    5536 kubeadm.go:934] updating node { 172.27.244.98 8443 v1.31.2 docker true true} ...
	I1028 12:37:13.749874    5536 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-071500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.244.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
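The kubelet drop-in shown above clears the inherited ExecStart and re-issues it with the per-node flags: the bootstrap kubeconfig, the kubelet config.yaml, the hostname override and the node IP. Assembling that line is plain string formatting; a sketch with this cluster's values:

    package main

    import "fmt"

    func kubeletExecStart(version, nodeName, nodeIP string) string {
        return fmt.Sprintf(
            "ExecStart=/var/lib/minikube/binaries/%s/kubelet"+
                " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
                " --config=/var/lib/kubelet/config.yaml"+
                " --hostname-override=%s"+
                " --kubeconfig=/etc/kubernetes/kubelet.conf"+
                " --node-ip=%s",
            version, nodeName, nodeIP)
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.31.2", "multinode-071500", "172.27.244.98"))
    }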
	I1028 12:37:13.759276    5536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 12:37:13.827758    5536 command_runner.go:130] > cgroupfs
	I1028 12:37:13.827882    5536 cni.go:84] Creating CNI manager for ""
	I1028 12:37:13.827882    5536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:37:13.827882    5536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:37:13.828102    5536 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.244.98 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-071500 NodeName:multinode-071500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.244.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.244.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:37:13.828176    5536 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.244.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-071500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.27.244.98"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.244.98"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
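Everything from kubeadm.go:195 to this point is the generated /var/tmp/minikube/kubeadm.yaml: an InitConfiguration (advertise address, cri-dockerd socket, node name), a ClusterConfiguration (admission plugins, control-plane endpoint, pod and service CIDRs), a KubeletConfiguration pinned to cgroupfs and cri-dockerd, and a KubeProxyConfiguration. The node-specific values come from the kubeadm options logged just above; a toy illustration of that kind of substitution with text/template follows (a much-reduced template with invented field names, not minikube's own):

    package main

    import (
        "os"
        "text/template"
    )

    // Fields are illustrative; the real configuration carries many more options.
    type kubeadmParams struct {
        AdvertiseAddress string
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, kubeadmParams{
            AdvertiseAddress: "172.27.244.98",
            NodeName:         "multinode-071500",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
        })
    }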
	
	I1028 12:37:13.839685    5536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:37:13.858323    5536 command_runner.go:130] > kubeadm
	I1028 12:37:13.858706    5536 command_runner.go:130] > kubectl
	I1028 12:37:13.858706    5536 command_runner.go:130] > kubelet
	I1028 12:37:13.858899    5536 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:37:13.871762    5536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:37:13.888986    5536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 12:37:13.921445    5536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:37:13.953827    5536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1028 12:37:14.000462    5536 ssh_runner.go:195] Run: grep 172.27.244.98	control-plane.minikube.internal$ /etc/hosts
	I1028 12:37:14.007451    5536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.244.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:37:14.042964    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:37:14.253260    5536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:37:14.286713    5536 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500 for IP: 172.27.244.98
	I1028 12:37:14.286788    5536 certs.go:194] generating shared ca certs ...
	I1028 12:37:14.286788    5536 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.287681    5536 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I1028 12:37:14.287753    5536 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I1028 12:37:14.287753    5536 certs.go:256] generating profile certs ...
	I1028 12:37:14.289010    5536 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.key
	I1028 12:37:14.289010    5536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.crt with IP's: []
	I1028 12:37:14.594411    5536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.crt ...
	I1028 12:37:14.594411    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.crt: {Name:mk1f5e585e0e9ad0432871d547ee6c6b1ba991a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.596368    5536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.key ...
	I1028 12:37:14.596368    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\client.key: {Name:mk8fa754fe6c198907533302a4c7b316f4588580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.597362    5536 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259
	I1028 12:37:14.598011    5536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.244.98]
	I1028 12:37:14.791678    5536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259 ...
	I1028 12:37:14.791678    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259: {Name:mka8e1efedf7e1deef86e5fd8565257166d7c19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.793243    5536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259 ...
	I1028 12:37:14.793243    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259: {Name:mke3e5e7965f50f386299c9c24bf21b96f6b90ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:14.794249    5536 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt.c3f56259 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt
	I1028 12:37:14.807353    5536 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key.c3f56259 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key
	I1028 12:37:14.809346    5536 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key
	I1028 12:37:14.809346    5536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt with IP's: []
	I1028 12:37:15.045583    5536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt ...
	I1028 12:37:15.045583    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt: {Name:mkfe7fa30da62946c38b24010c9b77700ad691e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:15.046648    5536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key ...
	I1028 12:37:15.046648    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key: {Name:mk70e1f18db667753ce0e2dac5958f21cb8425aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
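certs.go generates the profile certificates on the host before copying them into the guest: a client cert, an apiserver serving cert whose SANs are exactly the IPs logged above (10.96.0.1, 127.0.0.1, 10.0.0.1, 172.27.244.98), and the aggregator proxy-client cert, all signed by the already-existing minikubeCA. Issuing a serving cert with IP SANs against a CA looks roughly like the sketch below (standard crypto/x509, a throwaway CA standing in for minikubeCA, RSA keys assumed for simplicity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // A throwaway CA standing in for minikubeCA (which minikube reuses, see certs.go:235 above).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Apiserver serving cert with the same IP SAN set the log reports.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("172.27.244.98"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }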
	I1028 12:37:15.047732    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 12:37:15.048403    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 12:37:15.048665    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 12:37:15.048803    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 12:37:15.048962    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 12:37:15.048962    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 12:37:15.048962    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 12:37:15.060654    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 12:37:15.061615    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem (1338 bytes)
	W1028 12:37:15.061615    5536 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608_empty.pem, impossibly tiny 0 bytes
	I1028 12:37:15.061615    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1028 12:37:15.062702    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 12:37:15.062971    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 12:37:15.063259    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1028 12:37:15.063530    5536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem (1708 bytes)
	I1028 12:37:15.063530    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem -> /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.063530    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.064340    5536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.064710    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:37:15.116149    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:37:15.169421    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:37:15.224260    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:37:15.273495    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:37:15.321830    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:37:15.377389    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:37:15.419419    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:37:15.472334    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9608.pem --> /usr/share/ca-certificates/9608.pem (1338 bytes)
	I1028 12:37:15.523430    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /usr/share/ca-certificates/96082.pem (1708 bytes)
	I1028 12:37:15.572078    5536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:37:15.622398    5536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:37:15.668379    5536 ssh_runner.go:195] Run: openssl version
	I1028 12:37:15.677497    5536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 12:37:15.689328    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9608.pem && ln -fs /usr/share/ca-certificates/9608.pem /etc/ssl/certs/9608.pem"
	I1028 12:37:15.725265    5536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.732223    5536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.733120    5536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:03 /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.748148    5536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9608.pem
	I1028 12:37:15.757749    5536 command_runner.go:130] > 51391683
	I1028 12:37:15.770018    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9608.pem /etc/ssl/certs/51391683.0"
	I1028 12:37:15.802724    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96082.pem && ln -fs /usr/share/ca-certificates/96082.pem /etc/ssl/certs/96082.pem"
	I1028 12:37:15.835085    5536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.842025    5536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.842025    5536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:03 /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.853022    5536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96082.pem
	I1028 12:37:15.862933    5536 command_runner.go:130] > 3ec20f2e
	I1028 12:37:15.874352    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96082.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:37:15.911805    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:37:15.946874    5536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.954118    5536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.954211    5536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.965557    5536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:37:15.975907    5536 command_runner.go:130] > b5213941
	I1028 12:37:15.987441    5536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
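The three-step pattern repeated above installs each CA bundle into the guest's trust store: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash (`openssl x509 -hash -noout`, yielding 51391683, 3ec20f2e and b5213941 here), and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL can look the cert up by hash. A sketch of the hash-and-link step, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash mimics "openssl x509 -hash -noout -in <pem>" followed by
    // "ln -fs <pem> /etc/ssl/certs/<hash>.0".
    func linkByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. 51391683 for 9608.pem in the log
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f: replace an existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        for _, p := range []string{
            "/usr/share/ca-certificates/9608.pem",
            "/usr/share/ca-certificates/96082.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
        } {
            if err := linkByHash(p); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("trust links installed")
    }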
	I1028 12:37:16.021190    5536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:37:16.029055    5536 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:37:16.029055    5536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:37:16.029055    5536 kubeadm.go:392] StartCluster: {Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:37:16.043248    5536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 12:37:16.093845    5536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:37:16.111538    5536 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1028 12:37:16.111538    5536 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1028 12:37:16.111538    5536 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1028 12:37:16.124364    5536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:37:16.155900    5536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1028 12:37:16.178484    5536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:37:16.179481    5536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:37:16.179481    5536 kubeadm.go:157] found existing configuration files:
	
	I1028 12:37:16.191620    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:37:16.210306    5536 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:37:16.211312    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:37:16.222304    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:37:16.254282    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:37:16.274109    5536 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:37:16.274109    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:37:16.286292    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:37:16.318848    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:37:16.336488    5536 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:37:16.336488    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:37:16.347529    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:37:16.377438    5536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:37:16.396526    5536 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:37:16.397077    5536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:37:16.409528    5536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:37:16.428468    5536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:37:16.924653    5536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:37:16.924702    5536 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:37:31.100298    5536 command_runner.go:130] > [init] Using Kubernetes version: v1.31.2
	I1028 12:37:31.100368    5536 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:37:31.100506    5536 command_runner.go:130] > [preflight] Running pre-flight checks
	I1028 12:37:31.100563    5536 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:37:31.100738    5536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:37:31.100770    5536 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:37:31.100891    5536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:37:31.100953    5536 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:37:31.101098    5536 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:37:31.101180    5536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:37:31.101428    5536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:37:31.101428    5536 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:37:31.107557    5536 out.go:235]   - Generating certificates and keys ...
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1028 12:37:31.107557    5536 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1028 12:37:31.107557    5536 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:37:31.108998    5536 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1028 12:37:31.108998    5536 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:37:31.109273    5536 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109273    5536 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109273    5536 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:37:31.109273    5536 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-071500] and IPs [172.27.244.98 127.0.0.1 ::1]
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:37:31.109892    5536 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:37:31.109892    5536 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1028 12:37:31.110447    5536 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:37:31.110447    5536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:37:31.110596    5536 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:37:31.110660    5536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:37:31.110820    5536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:37:31.110820    5536 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:37:31.110820    5536 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:37:31.110820    5536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:37:31.111205    5536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:37:31.111205    5536 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:37:31.111480    5536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:37:31.111480    5536 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:37:31.111773    5536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:37:31.111773    5536 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:37:31.112001    5536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:37:31.112001    5536 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:37:31.117218    5536 out.go:235]   - Booting up control plane ...
	I1028 12:37:31.118311    5536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:37:31.118311    5536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:37:31.118311    5536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:37:31.118311    5536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:37:31.119112    5536 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:37:31.119364    5536 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:37:31.119364    5536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:37:31.119537    5536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:37:31.119537    5536 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1028 12:37:31.119657    5536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:37:31.119657    5536 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:37:31.119657    5536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:37:31.119657    5536 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:37:31.120281    5536 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.907019ms
	I1028 12:37:31.120281    5536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.907019ms
	I1028 12:37:31.120476    5536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:37:31.120476    5536 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:37:31.120523    5536 command_runner.go:130] > [api-check] The API server is healthy after 7.502651172s
	I1028 12:37:31.120523    5536 kubeadm.go:310] [api-check] The API server is healthy after 7.502651172s
	I1028 12:37:31.120816    5536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:37:31.120816    5536 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:37:31.121113    5536 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:37:31.121113    5536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:37:31.121113    5536 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:37:31.121113    5536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:37:31.121614    5536 command_runner.go:130] > [mark-control-plane] Marking the node multinode-071500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:37:31.121614    5536 kubeadm.go:310] [mark-control-plane] Marking the node multinode-071500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:37:31.121614    5536 command_runner.go:130] > [bootstrap-token] Using token: pbes7g.dxbg0wnwf67644gb
	I1028 12:37:31.121614    5536 kubeadm.go:310] [bootstrap-token] Using token: pbes7g.dxbg0wnwf67644gb
	I1028 12:37:31.125981    5536 out.go:235]   - Configuring RBAC rules ...
	I1028 12:37:31.127043    5536 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:37:31.127043    5536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:37:31.127226    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:37:31.127349    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:37:31.127592    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:37:31.127592    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:37:31.127842    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:37:31.127842    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:37:31.128115    5536 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:37:31.128115    5536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:37:31.128512    5536 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:37:31.128512    5536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:37:31.128749    5536 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:37:31.128749    5536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:37:31.128749    5536 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1028 12:37:31.128749    5536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:37:31.128749    5536 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1028 12:37:31.128749    5536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:37:31.128749    5536 kubeadm.go:310] 
	I1028 12:37:31.129298    5536 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1028 12:37:31.129298    5536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:37:31.129298    5536 kubeadm.go:310] 
	I1028 12:37:31.129489    5536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:37:31.129489    5536 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1028 12:37:31.129489    5536 kubeadm.go:310] 
	I1028 12:37:31.129489    5536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:37:31.129489    5536 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1028 12:37:31.129809    5536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:37:31.129809    5536 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:37:31.130024    5536 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:37:31.130024    5536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:37:31.130024    5536 kubeadm.go:310] 
	I1028 12:37:31.130222    5536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:37:31.130222    5536 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1028 12:37:31.130222    5536 kubeadm.go:310] 
	I1028 12:37:31.130222    5536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:37:31.130471    5536 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:37:31.130577    5536 kubeadm.go:310] 
	I1028 12:37:31.130687    5536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:37:31.130687    5536 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1028 12:37:31.130687    5536 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:37:31.130687    5536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:37:31.130687    5536 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:37:31.130687    5536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:37:31.130687    5536 kubeadm.go:310] 
	I1028 12:37:31.130687    5536 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:37:31.131230    5536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:37:31.131445    5536 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1028 12:37:31.131445    5536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:37:31.131445    5536 kubeadm.go:310] 
	I1028 12:37:31.131661    5536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.131661    5536 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.131920    5536 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b \
	I1028 12:37:31.131920    5536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b \
	I1028 12:37:31.131920    5536 command_runner.go:130] > 	--control-plane 
	I1028 12:37:31.131920    5536 kubeadm.go:310] 	--control-plane 
	I1028 12:37:31.131920    5536 kubeadm.go:310] 
	I1028 12:37:31.132299    5536 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:37:31.132299    5536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:37:31.132299    5536 kubeadm.go:310] 
	I1028 12:37:31.132433    5536 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.132467    5536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pbes7g.dxbg0wnwf67644gb \
	I1028 12:37:31.132562    5536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b 
	I1028 12:37:31.132562    5536 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:eb7fc25a690a0e76a7d2feda31995e2d0bc365bbb0fa96d3c4327375dc85602b 
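The kubeadm init summary above closes with the two join commands for this cluster. The --discovery-token-ca-cert-hash they carry is not a secret: it is "sha256:" plus the hex SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo), so a joining node can pin the CA it is told to trust. A small Go sketch of recomputing it, assuming the control plane's CA at the standard /etc/kubernetes/pki/ca.crt path:

    // Sketch: recompute kubeadm's --discovery-token-ca-cert-hash from the
    // cluster CA certificate (standard kubeadm path assumed below). The hash
    // is SHA-256 over the CA's DER-encoded SubjectPublicKeyInfo.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("ca.crt: no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }

Run on this control plane it should reproduce the sha256:eb7fc25a… value shown above.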
	I1028 12:37:31.132562    5536 cni.go:84] Creating CNI manager for ""
	I1028 12:37:31.132562    5536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:37:31.134922    5536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 12:37:31.150337    5536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 12:37:31.158678    5536 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1028 12:37:31.158678    5536 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I1028 12:37:31.158678    5536 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I1028 12:37:31.158888    5536 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 12:37:31.158888    5536 command_runner.go:130] > Access: 2024-10-28 12:35:34.092376600 +0000
	I1028 12:37:31.158888    5536 command_runner.go:130] > Modify: 2024-10-15 20:14:00.000000000 +0000
	I1028 12:37:31.158888    5536 command_runner.go:130] > Change: 2024-10-28 12:35:25.488000000 +0000
	I1028 12:37:31.158888    5536 command_runner.go:130] >  Birth: -
	I1028 12:37:31.159021    5536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 12:37:31.159099    5536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 12:37:31.215841    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 12:37:32.010863    5536 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1028 12:37:32.010863    5536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1028 12:37:32.010863    5536 command_runner.go:130] > serviceaccount/kindnet created
	I1028 12:37:32.010863    5536 command_runner.go:130] > daemonset.apps/kindnet created
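The CNI block above is minikube's multinode path: it stats /opt/cni/bin/portmap to confirm the bundled CNI plugins are present, streams the 2601-byte kindnet manifest from memory to /var/tmp/minikube/cni.yaml, and applies it inside the VM with the version-pinned kubectl, producing the four "created" objects. A minimal sketch of that copy-then-apply pattern over SSH, assuming an already-connected *ssh.Client (applyCNIManifest is a hypothetical helper, not minikube's ssh_runner API):

    // A sketch of the copy-then-apply step above, assuming an established
    // *ssh.Client into the VM. applyCNIManifest is a hypothetical helper,
    // not minikube's ssh_runner API.
    package sketch

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func applyCNIManifest(client *ssh.Client, manifest []byte) error {
        // Stream the manifest from memory to the VM, mirroring the
        // "ssh_runner: scp memory --> /var/tmp/minikube/cni.yaml" line.
        copySession, err := client.NewSession()
        if err != nil {
            return err
        }
        copySession.Stdin = bytes.NewReader(manifest)
        if err := copySession.Run("sudo tee /var/tmp/minikube/cni.yaml >/dev/null"); err != nil {
            copySession.Close()
            return fmt.Errorf("copy manifest: %w", err)
        }
        copySession.Close()

        // Apply it with the version-pinned kubectl, as in the log.
        applySession, err := client.NewSession()
        if err != nil {
            return err
        }
        defer applySession.Close()
        out, err := applySession.CombinedOutput(
            "sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply" +
                " --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml")
        if err != nil {
            return fmt.Errorf("kubectl apply: %w\n%s", err, out)
        }
        return nil
    }

Streaming the manifest over stdin avoids a scratch file on the Windows host, which is essentially what the "scp memory -->" wording describes.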
	I1028 12:37:32.011046    5536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:37:32.025538    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:32.027637    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-071500 minikube.k8s.io/updated_at=2024_10_28T12_37_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=multinode-071500 minikube.k8s.io/primary=true
	I1028 12:37:32.048155    5536 command_runner.go:130] > -16
	I1028 12:37:32.048318    5536 ops.go:34] apiserver oom_adj: -16
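The oom_adj probe just above reads /proc/$(pgrep kube-apiserver)/oom_adj; the -16 recorded by ops.go means the API server is a very unlikely target for the kernel OOM killer. A rough Go equivalent of that one-liner, assuming a single kube-apiserver process on the node:

    // Rough Go equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj`;
    // assumes exactly one kube-apiserver process is running on the node.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("pgrep:", err) // non-zero exit when nothing matched
            return
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read oom_adj:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 above
    }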
	I1028 12:37:32.205967    5536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1028 12:37:32.216569    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:32.259764    5536 command_runner.go:130] > node/multinode-071500 labeled
	I1028 12:37:32.359935    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:32.718905    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:32.848265    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:33.218286    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:33.332739    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:33.718812    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:33.864455    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:34.218310    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:34.334176    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:34.717954    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:34.832944    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:35.218331    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:35.347964    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:35.720077    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:35.905700    5536 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1028 12:37:36.216691    5536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:37:36.393053    5536 command_runner.go:130] > NAME      SECRETS   AGE
	I1028 12:37:36.393053    5536 command_runner.go:130] > default   0         1s
	I1028 12:37:36.396072    5536 kubeadm.go:1113] duration metric: took 4.3849762s to wait for elevateKubeSystemPrivileges
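The run of Error from server (NotFound): serviceaccounts "default" not found lines above is expected rather than a failure: immediately after kubeadm init, minikube keeps running kubectl get sa default (roughly every 500ms) until the controller-manager creates the default ServiceAccount, and the 4.38s elevateKubeSystemPrivileges duration metric is the length of that wait. A minimal client-go sketch of the same wait, assuming a configured clientset; waitForDefaultServiceAccount is a hypothetical helper, not minikube's implementation:

    // A sketch of the wait implied by the retries above, assuming a
    // configured kubernetes.Interface.
    package sketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForDefaultServiceAccount(ctx context.Context, c kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                if err != nil {
                    return false, nil // "not found" yet; keep polling
                }
                return true, nil
            })
    }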
	I1028 12:37:36.396179    5536 kubeadm.go:394] duration metric: took 20.3668939s to StartCluster
	I1028 12:37:36.396179    5536 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:36.396456    5536 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:37:36.399063    5536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:36.400502    5536 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 12:37:36.400502    5536 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:37:36.400502    5536 addons.go:69] Setting storage-provisioner=true in profile "multinode-071500"
	I1028 12:37:36.401212    5536 addons.go:234] Setting addon storage-provisioner=true in "multinode-071500"
	I1028 12:37:36.401212    5536 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:37:36.400502    5536 addons.go:69] Setting default-storageclass=true in profile "multinode-071500"
	I1028 12:37:36.401212    5536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-071500"
	I1028 12:37:36.401212    5536 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:37:36.401938    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:36.401938    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:36.404336    5536 out.go:177] * Verifying Kubernetes components...
	I1028 12:37:36.429978    5536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:37:36.857284    5536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:37:36.910213    5536 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:37:36.911249    5536 kapi.go:59] client config for multinode-071500: &rest.Config{Host:"https://172.27.244.98:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
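The kapi.go dump above is the rest.Config minikube derives from the kubeconfig it just wrote: the API server at https://172.27.244.98:8443 plus the per-profile client certificate and key under .minikube\profiles\multinode-071500. A minimal sketch of building the same kind of config with client-go's clientcmd, using this job's kubeconfig path from the log (substitute your own):

    // A sketch of turning a kubeconfig file into a *rest.Config and clientset
    // with client-go; the path is this job's kubeconfig as shown in the log.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("API server:", cfg.Host, "clientset ready:", clientset != nil)
    }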
	I1028 12:37:36.913828    5536 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 12:37:36.916837    5536 node_ready.go:35] waiting up to 6m0s for node "multinode-071500" to be "Ready" ...
	I1028 12:37:36.916837    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:36.916837    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:36.916837    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:36.916837    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:36.946331    5536 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1028 12:37:36.946331    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:36.946331    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:36 GMT
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Audit-Id: 13a439ec-b907-4f16-962f-6da84ad0663a
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:36.946331    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:36.946331    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:36.946869    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:37.417826    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:37.417826    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:37.417826    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:37.417826    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:37.428199    5536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 12:37:37.428332    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:37.428332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:37.428332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:37 GMT
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Audit-Id: de281d49-2b8c-4166-864e-7eae9acd0b40
	I1028 12:37:37.428332    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:37.429228    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:37.917808    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:37.917808    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:37.917808    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:37.917808    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:37.923211    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:37.923300    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Audit-Id: 315c1fad-90e1-49fd-83f4-a2e4746b5d55
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:37.923300    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:37.923300    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:37.923300    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:37 GMT
	I1028 12:37:37.923685    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:38.418206    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:38.418206    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:38.418206    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:38.418206    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:38.422841    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:38.422955    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:38.423027    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:38 GMT
	I1028 12:37:38.423027    5536 round_trippers.go:580]     Audit-Id: dd40aad6-d607-403b-83c7-35647833649a
	I1028 12:37:38.423027    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:38.423090    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:38.423090    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:38.423090    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:38.423353    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:38.770369    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:38.770369    5536 main.go:141] libmachine: [stderr =====>] : 
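The libmachine lines interleaved here show how the hyperv driver queries the hypervisor: each check is a powershell.exe -NoProfile -NonInteractive one-liner such as ( Hyper-V\Get-VM multinode-071500 ).state, with its stdout ("Running") and stderr echoed back into the log. A bare-bones Go version of that round trip, illustrative only rather than the driver's actual wrapper:

    // Bare-bones version of libmachine's Hyper-V query above: shell out to
    // Windows PowerShell and read the VM state from stdout.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive",
            `( Hyper-V\Get-VM multinode-071500 ).state`,
        ).Output()
        if err != nil {
            fmt.Println("powershell:", err)
            return
        }
        fmt.Println("VM state:", strings.TrimSpace(string(out))) // "Running" above
    }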
	I1028 12:37:38.773361    5536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:37:38.775361    5536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:37:38.775361    5536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:37:38.775361    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:38.954378    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:38.954378    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:38.954378    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:38.954378    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:38.965371    5536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 12:37:38.965371    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:38.965371    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:38 GMT
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Audit-Id: c253f90b-9a90-4032-9191-0d5c8f130bea
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:38.965371    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:38.965371    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:38.965371    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:38.966374    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
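Each GET of /api/v1/nodes/multinode-071500 above is one iteration of the readiness wait announced earlier ("waiting up to 6m0s"): minikube re-fetches the Node about twice a second and inspects its Ready condition, logging the "Ready":"False" summary until the kubelet flips it to True. The core of that check with client-go, as a sketch (nodeIsReady is a hypothetical helper, not minikube's node_ready.go):

    // The core readiness check behind the polling above, with client-go.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }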
	I1028 12:37:38.982786    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:38.982786    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:38.983650    5536 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:37:38.984411    5536 kapi.go:59] client config for multinode-071500: &rest.Config{Host:"https://172.27.244.98:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-071500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29cb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 12:37:38.985810    5536 addons.go:234] Setting addon default-storageclass=true in "multinode-071500"
	I1028 12:37:38.985810    5536 host.go:66] Checking if "multinode-071500" exists ...
	I1028 12:37:38.986556    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:39.418155    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:39.418155    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:39.418155    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:39.418155    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:39.423324    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:39.423324    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:39.423324    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:39.423324    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:39 GMT
	I1028 12:37:39.423324    5536 round_trippers.go:580]     Audit-Id: 11bf0cd5-6e33-43b8-b1ff-1c1207d2d78d
	I1028 12:37:39.423426    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:39.423426    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:39.423426    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:39.423929    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:39.917623    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:39.917623    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:39.917623    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:39.917623    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:39.922072    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:39.922072    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Audit-Id: 54c92998-a915-4dde-ad0e-de6bb6b75701
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:39.922072    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:39.922072    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:39.922072    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:39 GMT
	I1028 12:37:39.922864    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:40.417917    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:40.417917    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:40.417917    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:40.417917    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:40.423318    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:40.423318    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:40.423318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:40.423318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:40 GMT
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Audit-Id: 26e8d931-18e5-4909-aacc-28c97dc00a2b
	I1028 12:37:40.423318    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:40.423911    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:40.917365    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:40.917365    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:40.917365    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:40.917365    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:40.922654    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:40.922794    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:40.922794    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:40 GMT
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Audit-Id: dcf84a97-d333-497e-af40-52abb6836f11
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:40.922861    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:40.922861    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:40.923111    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:41.269550    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:41.269550    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:41.269650    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:37:41.394147    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:41.394147    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:41.394225    5536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:37:41.394225    5536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:37:41.394225    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:37:41.417271    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:41.417271    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:41.417271    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:41.417271    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:41.423434    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:41.423434    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:41.423566    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:41.423566    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:41 GMT
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Audit-Id: 04bb6ff6-0c2a-4d9a-8f3d-2ce48de01279
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:41.423566    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:41.424442    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:41.425164    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:41.917272    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:41.917272    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:41.917272    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:41.917272    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:41.920301    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:41.920301    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Audit-Id: 5daa03f7-44ed-4e70-89e7-117527675faa
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:41.920301    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:41.920301    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:41.920301    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:41 GMT
	I1028 12:37:41.921311    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:42.418285    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:42.418285    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:42.418285    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:42.418285    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:42.423806    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:42.423806    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:42.423913    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:42 GMT
	I1028 12:37:42.423913    5536 round_trippers.go:580]     Audit-Id: 769b8895-5aa0-4a3a-b1ef-eff9550c8432
	I1028 12:37:42.424120    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:42.424120    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:42.424232    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:42.424232    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:42.425586    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:42.917897    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:42.917897    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:42.917897    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:42.917897    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:42.922254    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:42.922318    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Audit-Id: f2f91884-e6c2-41d6-bf3c-a1665ec41df1
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:42.922318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:42.922318    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:42.922318    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:42 GMT
	I1028 12:37:42.922594    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:43.417872    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:43.417872    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:43.417872    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:43.417872    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:43.423144    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:43.423144    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:43.423144    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:43.423234    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:43 GMT
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Audit-Id: 55f2a4dd-63e5-4c79-86d1-c34480cf5d94
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:43.423234    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:43.423681    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:43.720029    5536 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:37:43.720029    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:43.720129    5536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:37:43.917499    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:43.917499    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:43.917499    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:43.917499    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:44.021466    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:37:44.021466    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:44.023015    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:37:44.135798    5536 round_trippers.go:574] Response Status: 200 OK in 218 milliseconds
	I1028 12:37:44.135798    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:44.135798    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:44.135798    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:44 GMT
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Audit-Id: acd70d5d-993b-4752-ad93-e9477064e77b
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:44.135798    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:44.136800    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:44.136800    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
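
The libmachine lines above show the hyperv driver resolving the VM's address by shelling out to PowerShell with ((Hyper-V\Get-VM <name>).networkadapters[0]).ipaddresses[0]) before opening an SSH session to it. A minimal Go sketch of that lookup follows; it assumes a Windows host with the Hyper-V PowerShell module, and the helper name is hypothetical, not minikube's actual driver code.

// Sketch only: mirrors the PowerShell query logged by the hyperv driver.
// Assumes powershell.exe and the Hyper-V module are available on the host.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypervVMIP (hypothetical helper) returns the first IP of the VM's first NIC.
func hypervVMIP(vmName string) (string, error) {
	script := fmt.Sprintf("((Hyper-V\\Get-VM %s).networkadapters[0]).ipaddresses[0]", vmName)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hypervVMIP("multinode-071500")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("VM IP:", ip) // e.g. 172.27.244.98 in the run above
}
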
	I1028 12:37:44.173816    5536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:37:44.417577    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:44.417577    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:44.417577    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:44.417577    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:44.428062    5536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 12:37:44.428161    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:44.428161    5536 round_trippers.go:580]     Audit-Id: 69d74b5e-baad-4175-a585-8199d7c70df6
	I1028 12:37:44.428161    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:44.428161    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:44.428266    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:44.428266    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:44.428266    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:44 GMT
	I1028 12:37:44.431704    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:44.843361    5536 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1028 12:37:44.843501    5536 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1028 12:37:44.843501    5536 command_runner.go:130] > pod/storage-provisioner created
	I1028 12:37:44.916989    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:44.916989    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:44.916989    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:44.916989    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:44.920985    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:44.921611    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:44.921611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:44.921611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:44 GMT
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Audit-Id: feb5aeb9-98ce-4f04-9924-2e80dec153be
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:44.921611    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:44.922132    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:45.417238    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:45.417238    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:45.417238    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:45.417238    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:45.422779    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:45.422779    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:45.422779    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:45.422779    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:45.422779    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:45.422779    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:45.423015    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:45 GMT
	I1028 12:37:45.423015    5536 round_trippers.go:580]     Audit-Id: fba930e6-bd4d-486b-a240-6142705fb9be
	I1028 12:37:45.424058    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:45.917497    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:45.917497    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:45.917497    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:45.917497    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:45.922452    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:45.922577    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:45.922577    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:45 GMT
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Audit-Id: 54158cca-94b6-442f-b57a-4ea5fc4faa01
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:45.922577    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:45.922577    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:45.922577    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:46.345098    5536 main.go:141] libmachine: [stdout =====>] : 172.27.244.98
	
	I1028 12:37:46.345098    5536 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:37:46.346577    5536 sshutil.go:53] new ssh client: &{IP:172.27.244.98 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:37:46.417878    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:46.417878    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.417878    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.417878    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.422079    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:46.422079    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.422079    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Audit-Id: 32772b43-c260-4e49-8b78-5bb7a744b894
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.422079    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.422079    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.422079    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:46.422985    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:46.480260    5536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:37:46.645584    5536 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1028 12:37:46.645584    5536 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 12:37:46.645584    5536 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 12:37:46.646580    5536 round_trippers.go:463] GET https://172.27.244.98:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 12:37:46.646580    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.646580    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.646580    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.650458    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:46.650531    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.650531    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.650531    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Content-Length: 1273
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.650531    5536 round_trippers.go:580]     Audit-Id: bd43eef9-70df-416d-819b-444843a35d1c
	I1028 12:37:46.650531    5536 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"standard","uid":"b8cab4a0-88a1-4868-a1a9-823301b9aaf3","resourceVersion":"405","creationTimestamp":"2024-10-28T12:37:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T12:37:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1028 12:37:46.651434    5536 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b8cab4a0-88a1-4868-a1a9-823301b9aaf3","resourceVersion":"405","creationTimestamp":"2024-10-28T12:37:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T12:37:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 12:37:46.651434    5536 round_trippers.go:463] PUT https://172.27.244.98:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 12:37:46.651434    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.651434    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.651434    5536 round_trippers.go:473]     Content-Type: application/json
	I1028 12:37:46.651434    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.655347    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:46.655431    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.655431    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.655431    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Content-Length: 1220
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Audit-Id: 1d5a47d0-08fd-4583-a252-45d96ff04aa1
	I1028 12:37:46.655431    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.655431    5536 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b8cab4a0-88a1-4868-a1a9-823301b9aaf3","resourceVersion":"405","creationTimestamp":"2024-10-28T12:37:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T12:37:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 12:37:46.658587    5536 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 12:37:46.662914    5536 addons.go:510] duration metric: took 10.2622955s for enable addons: enabled=[storage-provisioner default-storageclass]
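
The preceding entries show the storage-provisioner and default-storageclass addons being applied with kubectl over SSH, after which the client reads back the "standard" StorageClass and PUTs it so the is-default-class annotation is kept. A hedged client-go sketch of the read-back step is below; the kubeconfig path is illustrative, not taken from this run's host.

// Sketch (assumed client-go usage, not minikube's code): confirm the
// "standard" StorageClass exists and carries the default-class annotation
// seen in the response bodies above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
	fmt.Printf("storageclass %q default=%s provisioner=%s\n", sc.Name, isDefault, sc.Provisioner)
}
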
	I1028 12:37:46.917326    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:46.917326    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:46.917326    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:46.917326    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:46.921733    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:46.922255    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:46.922255    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:46.922255    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:46 GMT
	I1028 12:37:46.922255    5536 round_trippers.go:580]     Audit-Id: a4da76dc-e390-4d81-bc00-39c695dfc6b5
	I1028 12:37:46.922615    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:47.417210    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:47.417847    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:47.417847    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:47.417847    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:47.421611    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:47.421611    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:47.421611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:47 GMT
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Audit-Id: 802cd577-73e8-4ee7-bf57-aa2cba12dac4
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:47.421611    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:47.421611    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:47.421829    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:47.918058    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:47.918058    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:47.918058    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:47.918058    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:47.922893    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:47.923004    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:47.923004    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:47.923004    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:47 GMT
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Audit-Id: fcaac02e-9221-4516-9547-bb246eee81fc
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:47.923004    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:47.923238    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:48.417365    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:48.417365    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:48.417365    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:48.417365    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:48.422636    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:48.422736    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:48.422736    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:48 GMT
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Audit-Id: a4b8fbba-5827-4e92-9107-47b52443cd53
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:48.422736    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:48.422736    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:48.422941    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:48.423483    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:48.917344    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:48.917877    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:48.917877    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:48.917877    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:48.921988    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:48.921988    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Audit-Id: d2895e7b-e5fc-42bd-8dde-df191967c52a
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:48.921988    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:48.921988    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:48.921988    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:48 GMT
	I1028 12:37:48.922564    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:49.417207    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:49.417207    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:49.417207    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:49.417207    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:49.422680    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:49.422680    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Audit-Id: 9afcc0ce-6f3b-43c6-9d69-77b26d48829d
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:49.422829    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:49.422829    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:49.422829    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:49 GMT
	I1028 12:37:49.422958    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:49.917320    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:49.917320    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:49.917320    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:49.917320    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:49.922860    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:49.922860    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:49.922860    5536 round_trippers.go:580]     Audit-Id: a3119704-f067-4464-9ed5-befc4e973211
	I1028 12:37:49.922860    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:49.922860    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:49.922860    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:49.922860    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:49.923833    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:49 GMT
	I1028 12:37:49.923833    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:50.418192    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:50.418192    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:50.418192    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:50.418192    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:50.423961    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:50.424066    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:50 GMT
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Audit-Id: fe0775c1-cf32-4aa2-9b7e-45be79b0a419
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:50.424066    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:50.424066    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:50.424066    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:50.424457    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:50.424772    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:50.917373    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:50.917373    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:50.917373    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:50.917373    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:50.922342    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:50.922430    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:50.922482    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:50.922482    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:50 GMT
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Audit-Id: 3c40a406-e32f-4f38-9b50-ecad14338b90
	I1028 12:37:50.922482    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:50.922746    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:51.417190    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:51.417190    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:51.417190    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:51.417190    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:51.422738    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:51.422738    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Audit-Id: a635f7e2-e822-44e6-8c42-0ba700ca8814
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:51.422738    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:51.422738    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:51.422738    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:51 GMT
	I1028 12:37:51.423150    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:51.917349    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:51.917349    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:51.917349    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:51.917349    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:51.921498    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:51.921498    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:51.921498    5536 round_trippers.go:580]     Audit-Id: b76e229c-3dff-4b03-bc46-fc353bf321c0
	I1028 12:37:51.921498    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:51.921498    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:51.921649    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:51.921649    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:51.921649    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:51 GMT
	I1028 12:37:51.921979    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:52.418023    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:52.418023    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:52.418023    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:52.418023    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:52.423508    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:52.423583    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Audit-Id: 8d506957-7b45-4f41-b468-bcd5fa83d12a
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:52.423583    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:52.423583    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:52.423583    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:52 GMT
	I1028 12:37:52.424149    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:52.917152    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:52.917152    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:52.917152    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:52.917152    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:52.922991    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:52.922991    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:52.922991    5536 round_trippers.go:580]     Audit-Id: 011922bc-880f-488f-a4da-532ee8f0bf2b
	I1028 12:37:52.923081    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:52.923081    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:52.923081    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:52.923081    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:52.923081    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:52 GMT
	I1028 12:37:52.923572    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:52.923834    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
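
The repeated GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500 requests, each followed by a node_ready check, poll the node's Ready condition, which stays False throughout this excerpt. A rough client-go sketch of such a loop follows; the interval and timeout are illustrative, not minikube's actual values, and the kubeconfig path is an assumption.

// Sketch of a node-readiness poll (assumed client-go usage, not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-071500", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	fmt.Println("timed out waiting for node Ready")
}
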
	I1028 12:37:53.417623    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:53.417623    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:53.417722    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:53.417722    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:53.421753    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:53.421753    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:53.421753    5536 round_trippers.go:580]     Audit-Id: 80fb923e-8edc-40ff-a850-fbf6986b0c07
	I1028 12:37:53.421840    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:53.421840    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:53.421840    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:53.421840    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:53.421840    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:53 GMT
	I1028 12:37:53.422354    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:53.917230    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:53.917230    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:53.917230    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:53.917230    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:53.922722    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:53.923259    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:53 GMT
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Audit-Id: c95063d9-1566-4250-a4fe-85c56f2c8ad0
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:53.923259    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:53.923352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:53.923352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:53.923570    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:54.417126    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:54.417949    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:54.417949    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:54.417949    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:54.421255    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:54.422102    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Audit-Id: 4e9882e0-61eb-4418-b1fe-29e24956b5c5
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:54.422102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:54.422102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:54.422102    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:54 GMT
	I1028 12:37:54.422536    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:54.917865    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:54.917865    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:54.917865    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:54.917865    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:54.938043    5536 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1028 12:37:54.938043    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:54.938043    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:54.938143    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:54.938143    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:54.938143    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:54.938143    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:54 GMT
	I1028 12:37:54.938143    5536 round_trippers.go:580]     Audit-Id: 9b1b7075-8a96-4adb-a6b2-29491215d2c9
	I1028 12:37:54.938550    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:54.939107    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:55.417146    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:55.417146    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:55.417146    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:55.417146    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:55.422402    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:55.422861    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:55 GMT
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Audit-Id: 6ab632ea-c495-4d17-80dc-77e29d268631
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:55.422861    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:55.422861    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:55.422861    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:55.423021    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:55.918660    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:55.918660    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:55.918660    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:55.918793    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:55.925641    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:55.925641    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:55.925641    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:55 GMT
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Audit-Id: 0f6fbcf3-7f4d-4f45-b925-27eb5c3a105a
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:55.925641    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:55.925641    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:55.925641    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:56.418369    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:56.418369    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:56.418369    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:56.418369    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:56.422261    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:56.422896    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:56.422896    5536 round_trippers.go:580]     Audit-Id: 2ef000cc-4270-44b8-b69e-f354a2858c86
	I1028 12:37:56.422896    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:56.422896    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:56.422896    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:56.423010    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:56.423010    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:56 GMT
	I1028 12:37:56.423341    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:56.917646    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:56.917728    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:56.917728    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:56.917728    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:56.921557    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:56.921557    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:56.921557    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:56.921557    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:56 GMT
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Audit-Id: 771053e1-b0fd-4f9b-a932-a49b8e6a2736
	I1028 12:37:56.921557    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:56.921982    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:57.417561    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:57.417561    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:57.417561    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:57.417561    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:57.422350    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:57.422350    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Audit-Id: 07f7246f-d11f-4a63-b228-bad5b3d4c29c
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:57.422350    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:57.422350    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:57.422350    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:57 GMT
	I1028 12:37:57.422678    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:57.422867    5536 node_ready.go:53] node "multinode-071500" has status "Ready":"False"
	I1028 12:37:57.917236    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:57.917811    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:57.917811    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:57.917811    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:57.922375    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:57.922518    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:57.922518    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:57 GMT
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Audit-Id: dea22cd0-853c-4ecd-981b-6df3f4753b3d
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:57.922577    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:57.922647    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:57.923083    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:58.417249    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:58.417772    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.417772    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.417772    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.421269    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:58.422257    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Audit-Id: eb19e890-4497-41c2-bb2c-b8d436b9fecf
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.422257    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.422257    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.422257    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.422406    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"352","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1028 12:37:58.917800    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:58.917800    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.917800    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.917800    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.922886    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:58.922886    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.922886    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.922886    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Audit-Id: e6e7cef2-5007-45d9-a6cd-67e56f69c797
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.922886    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.922886    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:37:58.923975    5536 node_ready.go:49] node "multinode-071500" has status "Ready":"True"
	I1028 12:37:58.923975    5536 node_ready.go:38] duration metric: took 22.0068894s for node "multinode-071500" to be "Ready" ...
	I1028 12:37:58.923975    5536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
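(Editorial aside, for orientation only.) The node wait logged above is a plain poll of the node's Ready condition over the API server, one GET roughly every 500ms until the condition flips to True. A minimal client-go sketch of that pattern follows; it is not minikube's node_ready.go, and the kubeconfig path, timeout, and variable names are assumptions, not values taken from this run.

// Illustrative sketch only: poll a node's Ready condition the way the
// node_ready wait above does. Kubeconfig path, timeout and the 500ms
// interval are assumptions based on the surrounding log, not minikube code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; substitute the cluster's real kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "multinode-071500" // node name from the log above

	// Poll every 500ms (the cadence visible in the timestamps above); the
	// 6-minute ceiling is an assumption for the sketch.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, getErr := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("node ready wait finished, err =", err)
}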
	I1028 12:37:58.923975    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:37:58.923975    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.923975    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.923975    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.928398    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:37:58.928454    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Audit-Id: d5aa28a9-cefd-454d-b363-90d94b2ea667
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.928454    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.928454    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.928454    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.929755    5536 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65510 chars]
	I1028 12:37:58.935700    5536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace to be "Ready" ...
	I1028 12:37:58.935700    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:37:58.935700    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.935700    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.935700    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.939408    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:37:58.939408    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Audit-Id: 3e32ea8e-e381-45a8-bc05-294c5c14736c
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.939473    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.939473    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.939473    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.939589    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I1028 12:37:58.940265    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:58.940323    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:58.940323    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:58.940323    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:58.946996    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:58.946996    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:58 GMT
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Audit-Id: 312bc620-8df9-421b-8df6-0355a2b96de8
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:58.947112    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:58.947112    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:58.947112    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:58.947112    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:37:59.436166    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:37:59.436166    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.436166    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.436166    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.442166    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:37:59.442166    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Audit-Id: cc344c01-b200-45c5-98d4-948843aba777
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.442166    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.442166    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.442166    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.442166    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I1028 12:37:59.443405    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:59.443624    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.443624    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.443624    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.451930    5536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 12:37:59.451930    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Audit-Id: 9b011c79-161f-4716-8904-e39bdcae869a
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.451930    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.451930    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.451930    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.452517    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:37:59.936552    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:37:59.936552    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.936552    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.936552    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.941667    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:37:59.941667    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.941667    5536 round_trippers.go:580]     Audit-Id: 12135fb7-c562-411f-bfa5-9fb521c9c5ba
	I1028 12:37:59.941667    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.941667    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.941751    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.941751    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.941751    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.941936    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"417","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I1028 12:37:59.942932    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:37:59.943016    5536 round_trippers.go:469] Request Headers:
	I1028 12:37:59.943016    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:37:59.943016    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:37:59.945709    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:37:59.946409    5536 round_trippers.go:577] Response Headers:
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:37:59.946409    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:37:59.946409    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:37:59 GMT
	I1028 12:37:59.946409    5536 round_trippers.go:580]     Audit-Id: 328dbcfa-0af2-4428-9ada-a16fb320c79d
	I1028 12:37:59.951062    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:38:00.436528    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:38:00.436528    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.436528    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.436528    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.441027    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:00.441087    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Audit-Id: cf08cfaa-9386-4274-8b50-b3dd656d7b5d
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.441087    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.441087    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.441087    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.441087    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"437","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7063 chars]
	I1028 12:38:00.441856    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:00.441856    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.441856    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.441856    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.445490    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:00.445490    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.445490    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.445490    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Audit-Id: c344ee00-a251-40bf-80a7-715814066b3e
	I1028 12:38:00.445490    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.446154    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:38:00.936690    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:38:00.936690    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.936690    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.936690    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.941706    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:38:00.941770    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.941770    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.941770    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Audit-Id: 22d1eea9-2802-42eb-8ee9-90466b3d8269
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.941770    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.941965    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"437","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7063 chars]
	I1028 12:38:00.941965    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:00.942705    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:00.942705    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:00.942705    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:00.948102    5536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 12:38:00.948102    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:00.948102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:00.948102    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:00 GMT
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Audit-Id: d6fbc107-106f-4bcb-bc4d-95d89a1a2e67
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:00.948102    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:00.948455    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"412","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1028 12:38:00.948719    5536 pod_ready.go:103] pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace has status "Ready":"False"
	I1028 12:38:01.436109    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8vdn
	I1028 12:38:01.436109    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.436109    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.436109    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.466757    5536 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1028 12:38:01.466820    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.466820    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.466820    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Audit-Id: e7b8af39-fd45-4be2-bc85-29997b1b880e
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.466820    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.471844    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"443","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I1028 12:38:01.472523    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.472523    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.472523    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.472523    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.476286    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:01.476534    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.476534    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.476534    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.476534    5536 round_trippers.go:580]     Audit-Id: 23a56b13-6a26-4235-9a63-dfe8c89b63ba
	I1028 12:38:01.476534    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.476607    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.476669    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.476916    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.477446    5536 pod_ready.go:93] pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.477446    5536 pod_ready.go:82] duration metric: took 2.5417173s for pod "coredns-7c65d6cfc9-j8vdn" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.477508    5536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w5gxr" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.477569    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w5gxr
	I1028 12:38:01.477631    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.477631    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.477692    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.482391    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.482883    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.482883    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.482883    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Audit-Id: 479da8b5-f766-48ce-9af0-5be9317d8663
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.482883    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.483054    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-w5gxr","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852","resourceVersion":"450","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I1028 12:38:01.483737    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.483737    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.483737    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.483737    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.491656    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:38:01.491682    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.491682    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Audit-Id: 916170ed-c647-4f63-b92c-5e3e16caaa21
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.491682    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.491682    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.491861    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.492458    5536 pod_ready.go:93] pod "coredns-7c65d6cfc9-w5gxr" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.492458    5536 pod_ready.go:82] duration metric: took 14.9497ms for pod "coredns-7c65d6cfc9-w5gxr" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.492458    5536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.492458    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-071500
	I1028 12:38:01.492458    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.492458    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.492458    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.496063    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:01.496217    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.496217    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Audit-Id: 710f37c4-f0cb-4026-88c1-74a59474a51d
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.496217    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.496217    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.497044    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-071500","namespace":"kube-system","uid":"0def4362-6242-450b-a917-ea0720c76929","resourceVersion":"387","creationTimestamp":"2024-10-28T12:37:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.244.98:2379","kubernetes.io/config.hash":"f5222b65c24a069db70fce37c92f9fa9","kubernetes.io/config.mirror":"f5222b65c24a069db70fce37c92f9fa9","kubernetes.io/config.seen":"2024-10-28T12:37:30.614257114Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6465 chars]
	I1028 12:38:01.497627    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.497686    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.497686    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.497686    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.500045    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:38:01.500045    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Audit-Id: 5be1547d-3fae-4e6b-8fcc-29a0a3b63357
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.500045    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.500045    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.500045    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.501168    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.501168    5536 pod_ready.go:93] pod "etcd-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.501168    5536 pod_ready.go:82] duration metric: took 8.7107ms for pod "etcd-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.501168    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.501168    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-071500
	I1028 12:38:01.501168    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.501168    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.501168    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.505088    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:01.505352    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.505352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.505352    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Audit-Id: 064abf4a-e977-4559-a65a-6a966a18f532
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.505352    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.505675    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-071500","namespace":"kube-system","uid":"0216da7d-e0eb-403f-927d-5bcd780c85bb","resourceVersion":"354","creationTimestamp":"2024-10-28T12:37:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.244.98:8443","kubernetes.io/config.hash":"2d85a7b9464ac245c51684738092f57c","kubernetes.io/config.mirror":"2d85a7b9464ac245c51684738092f57c","kubernetes.io/config.seen":"2024-10-28T12:37:22.124230458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I1028 12:38:01.506307    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.506361    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.506361    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.506361    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.512077    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.512077    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.512077    5536 round_trippers.go:580]     Audit-Id: 1f0cbc0b-ee0b-4d71-85cc-098e41bf28a3
	I1028 12:38:01.512077    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.512077    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.512077    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.512077    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.512162    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.512462    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.512657    5536 pod_ready.go:93] pod "kube-apiserver-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.512657    5536 pod_ready.go:82] duration metric: took 11.4886ms for pod "kube-apiserver-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.512657    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.512657    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-071500
	I1028 12:38:01.512657    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.512657    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.512657    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.515332    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:38:01.515332    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Audit-Id: 9102c7e3-f1e7-4fe2-af82-41d579b8b3bb
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.515332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.515332    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.515332    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.515614    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-071500","namespace":"kube-system","uid":"f4f02743-40df-46cc-b3bf-39b846325812","resourceVersion":"383","creationTimestamp":"2024-10-28T12:37:29Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55a02bb632fe7724e43bb68086d66024","kubernetes.io/config.mirror":"55a02bb632fe7724e43bb68086d66024","kubernetes.io/config.seen":"2024-10-28T12:37:22.124231758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I1028 12:38:01.516207    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.516251    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.516251    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.516251    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.523038    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:38:01.523038    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.523038    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Audit-Id: cdc0c1c2-9b03-41d3-b136-b7341f9f1e40
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.523038    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.523038    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.523038    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.523038    5536 pod_ready.go:93] pod "kube-controller-manager-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.523038    5536 pod_ready.go:82] duration metric: took 10.3807ms for pod "kube-controller-manager-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.523038    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tgw89" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.637745    5536 request.go:632] Waited for 114.7056ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgw89
	I1028 12:38:01.637745    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgw89
	I1028 12:38:01.637745    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.637745    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.637745    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.642378    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.642378    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Audit-Id: 86fee2cd-d6de-4823-85fb-f29c06e30e96
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.642378    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.642378    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.642378    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.642686    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tgw89","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d","resourceVersion":"381","creationTimestamp":"2024-10-28T12:37:35Z","labels":{"controller-revision-hash":"77987969cc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2e018c8b-485d-4a2a-bf11-b2a0153acdac","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e018c8b-485d-4a2a-bf11-b2a0153acdac\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6194 chars]
	I1028 12:38:01.836385    5536 request.go:632] Waited for 192.79ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.836385    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:01.836910    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:01.837007    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:01.837007    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:01.841263    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:01.841263    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:01.841364    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:01.841364    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:01 GMT
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Audit-Id: 5a2e012c-f53e-4358-9839-c1517c54e5ad
	I1028 12:38:01.841364    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:01.841606    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:01.841606    5536 pod_ready.go:93] pod "kube-proxy-tgw89" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:01.841606    5536 pod_ready.go:82] duration metric: took 318.5641ms for pod "kube-proxy-tgw89" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:01.841606    5536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:02.037098    5536 request.go:632] Waited for 195.4897ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-071500
	I1028 12:38:02.037098    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-071500
	I1028 12:38:02.037098    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.037098    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.037098    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.041197    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:02.041197    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.041197    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.041197    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.041197    5536 round_trippers.go:580]     Audit-Id: 43d311f4-0aca-4b0f-9356-e74ca0e624ae
	I1028 12:38:02.042610    5536 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-071500","namespace":"kube-system","uid":"c7b70910-55e3-4e8d-a167-f30516fc8241","resourceVersion":"389","creationTimestamp":"2024-10-28T12:37:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee6f7d791319591d1ac968147343724b","kubernetes.io/config.mirror":"ee6f7d791319591d1ac968147343724b","kubernetes.io/config.seen":"2024-10-28T12:37:30.614268714Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I1028 12:38:02.236391    5536 request.go:632] Waited for 193.1359ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:02.236391    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes/multinode-071500
	I1028 12:38:02.236391    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.236391    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.236391    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.240300    5536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 12:38:02.241062    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.241062    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.241062    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Audit-Id: 4c274be6-0927-4541-8727-5995e02981bc
	I1028 12:38:02.241062    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.241595    5536 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-28T12:37:27Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1028 12:38:02.241595    5536 pod_ready.go:93] pod "kube-scheduler-multinode-071500" in "kube-system" namespace has status "Ready":"True"
	I1028 12:38:02.241595    5536 pod_ready.go:82] duration metric: took 399.9852ms for pod "kube-scheduler-multinode-071500" in "kube-system" namespace to be "Ready" ...
	I1028 12:38:02.241595    5536 pod_ready.go:39] duration metric: took 3.3175825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
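The pod_ready.go polling above amounts to a GET of the pod (and of its node) followed by a check of the pod's Ready condition. A minimal client-go sketch of that condition check, assuming a placeholder kubeconfig path and reusing the coredns pod name from the log; this is illustrative only, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// mirroring the "Ready":"True"/"False" results logged above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// "kubeconfig" is a placeholder path; point it at the kubeconfig the test wrote.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-j8vdn", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}

minikube's real check additionally retries in a loop and enforces the 6m0s per-pod timeout seen in the log; the sketch omits that loop.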
	I1028 12:38:02.242142    5536 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:38:02.254777    5536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:38:02.286294    5536 command_runner.go:130] > 2177
	I1028 12:38:02.286431    5536 api_server.go:72] duration metric: took 25.8856364s to wait for apiserver process to appear ...
	I1028 12:38:02.286431    5536 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:38:02.286496    5536 api_server.go:253] Checking apiserver healthz at https://172.27.244.98:8443/healthz ...
	I1028 12:38:02.298825    5536 api_server.go:279] https://172.27.244.98:8443/healthz returned 200:
	ok
	I1028 12:38:02.299052    5536 round_trippers.go:463] GET https://172.27.244.98:8443/version
	I1028 12:38:02.299162    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.299162    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.299162    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.301504    5536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 12:38:02.301578    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.301578    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.301578    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Content-Length: 263
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Audit-Id: 1a5cbb00-3b71-4056-8f12-536285df3a42
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.301578    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.301578    5536 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.2",
	  "gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-10-22T20:28:14Z",
	  "goVersion": "go1.22.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1028 12:38:02.301578    5536 api_server.go:141] control plane version: v1.31.2
	I1028 12:38:02.301578    5536 api_server.go:131] duration metric: took 15.1461ms to wait for apiserver health ...
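The healthz and version probes above use the same client credentials as the rest of the run. A sketch of both calls through client-go's discovery client, assuming a clientset built as in the previous sketch; the raw /healthz GET should return the literal "ok" logged above, and ServerVersion() decodes the /version payload:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer reproduces the two probes above: a raw GET of /healthz
// and a structured GET of /version via the discovery client.
func checkAPIServer(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	fmt.Printf("healthz: %s\n", body) // expect the literal "ok"

	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("version: %w", err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.31.2
	return nil
}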
	I1028 12:38:02.301578    5536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:38:02.437412    5536 request.go:632] Waited for 135.8326ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.437412    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.437412    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.437412    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.437412    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.445416    5536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 12:38:02.445416    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Audit-Id: 1a1f4e6c-2dc4-4c66-9a23-d40fcfa0c669
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.445416    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.445416    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.445416    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.447593    5536 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"443","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65755 chars]
	I1028 12:38:02.451117    5536 system_pods.go:59] 9 kube-system pods found
	I1028 12:38:02.451167    5536 system_pods.go:61] "coredns-7c65d6cfc9-j8vdn" [72f8f3d0-e08c-44f1-8f74-6f5685c5bf75] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "coredns-7c65d6cfc9-w5gxr" [2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "etcd-multinode-071500" [0def4362-6242-450b-a917-ea0720c76929] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kindnet-c7z7c" [9151b032-96d2-40e4-b4e6-6bac4ccb5180] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-apiserver-multinode-071500" [0216da7d-e0eb-403f-927d-5bcd780c85bb] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-controller-manager-multinode-071500" [f4f02743-40df-46cc-b3bf-39b846325812] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-proxy-tgw89" [fe651213-d8ad-43ae-b151-dd8ad6cd1e8d] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "kube-scheduler-multinode-071500" [c7b70910-55e3-4e8d-a167-f30516fc8241] Running
	I1028 12:38:02.451199    5536 system_pods.go:61] "storage-provisioner" [3041ff50-d6af-4c68-803f-78a36f22c000] Running
	I1028 12:38:02.451199    5536 system_pods.go:74] duration metric: took 149.6197ms to wait for pod list to return data ...
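The system_pods.go step is a single LIST of the kube-system namespace followed by a per-pod phase check, which is where the "9 kube-system pods found ... Running" lines come from. A sketch of the equivalent call, again assuming a clientset built as in the first sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods mirrors the system_pods.go step: one LIST of kube-system,
// then a per-pod check that the phase is Running.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	fmt.Printf("%d of %d Running\n", running, len(pods.Items))
	return nil
}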
	I1028 12:38:02.451304    5536 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:38:02.637105    5536 request.go:632] Waited for 185.6853ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/default/serviceaccounts
	I1028 12:38:02.637105    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/default/serviceaccounts
	I1028 12:38:02.637105    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.637105    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.637105    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.643330    5536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 12:38:02.643330    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.643330    5536 round_trippers.go:580]     Content-Length: 261
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Audit-Id: a99a47ba-cfa3-4bec-a954-5e60cb817dce
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.643414    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.643414    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.643414    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.643414    5536 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"576087d0-542d-474e-b0d9-e730f2067d88","resourceVersion":"355","creationTimestamp":"2024-10-28T12:37:35Z"}}]}
	I1028 12:38:02.643923    5536 default_sa.go:45] found service account: "default"
	I1028 12:38:02.644001    5536 default_sa.go:55] duration metric: took 192.6947ms for default service account to be created ...
	I1028 12:38:02.644001    5536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:38:02.836303    5536 request.go:632] Waited for 192.1699ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.836303    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/namespaces/kube-system/pods
	I1028 12:38:02.836303    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:02.836303    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:02.836303    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:02.841615    5536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 12:38:02.841615    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:02.841615    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:02.841615    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:02 GMT
	I1028 12:38:02.841615    5536 round_trippers.go:580]     Audit-Id: dcf84d68-734d-42ef-8426-f87cf5c832fc
	I1028 12:38:02.842955    5536 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8vdn","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75","resourceVersion":"443","creationTimestamp":"2024-10-28T12:37:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"19c0a0bf-94e7-4c95-aac9-c799f0d3848b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T12:37:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19c0a0bf-94e7-4c95-aac9-c799f0d3848b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65755 chars]
	I1028 12:38:02.846144    5536 system_pods.go:86] 9 kube-system pods found
	I1028 12:38:02.846217    5536 system_pods.go:89] "coredns-7c65d6cfc9-j8vdn" [72f8f3d0-e08c-44f1-8f74-6f5685c5bf75] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "coredns-7c65d6cfc9-w5gxr" [2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "etcd-multinode-071500" [0def4362-6242-450b-a917-ea0720c76929] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "kindnet-c7z7c" [9151b032-96d2-40e4-b4e6-6bac4ccb5180] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "kube-apiserver-multinode-071500" [0216da7d-e0eb-403f-927d-5bcd780c85bb] Running
	I1028 12:38:02.846217    5536 system_pods.go:89] "kube-controller-manager-multinode-071500" [f4f02743-40df-46cc-b3bf-39b846325812] Running
	I1028 12:38:02.846290    5536 system_pods.go:89] "kube-proxy-tgw89" [fe651213-d8ad-43ae-b151-dd8ad6cd1e8d] Running
	I1028 12:38:02.846290    5536 system_pods.go:89] "kube-scheduler-multinode-071500" [c7b70910-55e3-4e8d-a167-f30516fc8241] Running
	I1028 12:38:02.846290    5536 system_pods.go:89] "storage-provisioner" [3041ff50-d6af-4c68-803f-78a36f22c000] Running
	I1028 12:38:02.846290    5536 system_pods.go:126] duration metric: took 202.2873ms to wait for k8s-apps to be running ...
	I1028 12:38:02.846290    5536 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:38:02.856840    5536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:38:02.887240    5536 system_svc.go:56] duration metric: took 40.8164ms WaitForService to wait for kubelet
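The kubelet probe above is run through minikube's ssh_runner inside the VM; executed directly on the node, it reduces to one systemctl invocation whose exit status is the answer. A minimal sketch, assuming the systemd unit is named kubelet:

package main

import "os/exec"

// kubeletActive is the local equivalent of the systemctl probe above:
// "systemctl is-active --quiet <unit>" exits 0 only when the unit is active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}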
	I1028 12:38:02.887240    5536 kubeadm.go:582] duration metric: took 26.4864384s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:38:02.887240    5536 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:38:03.037033    5536 request.go:632] Waited for 149.7908ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.244.98:8443/api/v1/nodes
	I1028 12:38:03.037033    5536 round_trippers.go:463] GET https://172.27.244.98:8443/api/v1/nodes
	I1028 12:38:03.037033    5536 round_trippers.go:469] Request Headers:
	I1028 12:38:03.037033    5536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 12:38:03.037033    5536 round_trippers.go:473]     Accept: application/json, */*
	I1028 12:38:03.044866    5536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 12:38:03.044913    5536 round_trippers.go:577] Response Headers:
	I1028 12:38:03.044913    5536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 884a10f5-4460-4950-ab4c-acfde97ec06f
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Date: Mon, 28 Oct 2024 12:38:03 GMT
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Audit-Id: 1719c4ae-81ee-476f-9764-d99cf06029f3
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 12:38:03.044913    5536 round_trippers.go:580]     Content-Type: application/json
	I1028 12:38:03.044913    5536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 915cd059-0300-41e0-9ff6-134fbc3027e4
	I1028 12:38:03.045746    5536 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"multinode-071500","uid":"a781d003-713c-401b-84cb-a69301a8dd38","resourceVersion":"447","creationTimestamp":"2024-10-28T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-071500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"605803b196d1455ad0982199aad6722d11920536","minikube.k8s.io/name":"multinode-071500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T12_37_32_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1028 12:38:03.046323    5536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:38:03.046398    5536 node_conditions.go:123] node cpu capacity is 2
	I1028 12:38:03.046398    5536 node_conditions.go:105] duration metric: took 159.156ms to run NodePressure ...
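The NodePressure step lists the nodes and reads capacity and conditions from the result; the "Waited for ... due to client-side throttling" lines seen here and earlier come from client-go's client-side rate limiter (5 QPS, burst 10 by default unless the caller raises it), not from server-side API Priority and Fairness, as the message itself notes. A sketch that reports the same cpu and ephemeral-storage capacities plus the pressure conditions, assuming a clientset built as in the first sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeCapacityAndPressure prints the figures the NodePressure step logs:
// per-node cpu and ephemeral-storage capacity plus the pressure conditions.
func nodeCapacityAndPressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				// All three should report False on a healthy node.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
	return nil
}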
	I1028 12:38:03.046470    5536 start.go:241] waiting for startup goroutines ...
	I1028 12:38:03.046470    5536 start.go:246] waiting for cluster config update ...
	I1028 12:38:03.046470    5536 start.go:255] writing updated cluster config ...
	I1028 12:38:03.057918    5536 ssh_runner.go:195] Run: rm -f paused
	I1028 12:38:03.228333    5536 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:38:03.233952    5536 out.go:177] * Done! kubectl is now configured to use "multinode-071500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.324372202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.325471307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.416348959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.427572414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.427591214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.427715015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430045527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430105127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430151227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.430264328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 cri-dockerd[1325]: time="2024-10-28T12:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc34e719069f7a9d34907f483d865421c54e40c613bc542e28e61534eddf3683/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 12:37:59 multinode-071500 cri-dockerd[1325]: time="2024-10-28T12:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6591463df0d1e459073bfa0e55c5eb78f4168eb6c4c84321aaf32c66f9f7a546/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 12:37:59 multinode-071500 cri-dockerd[1325]: time="2024-10-28T12:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4aa32a83149a0ff38b11c6d6629fb377d8be7307560fb270d10f9fd319ab26cf/resolv.conf as [nameserver 172.27.240.1]"
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939179517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939436319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939464620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:37:59 multinode-071500 dockerd[1434]: time="2024-10-28T12:37:59.939580521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053257324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053484527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053522827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.053662128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.090507298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.091069303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.091418407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 12:38:00 multinode-071500 dockerd[1434]: time="2024-10-28T12:38:00.096615559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d9a4ff27464a9       c69fa2e9cbf5f                                                                              About a minute ago   Running             coredns                   0                   4aa32a83149a0       coredns-7c65d6cfc9-w5gxr
	68b45017566f4       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       0                   6591463df0d1e       storage-provisioner
	f3f03e6599ba5       c69fa2e9cbf5f                                                                              About a minute ago   Running             coredns                   0                   cc34e719069f7       coredns-7c65d6cfc9-j8vdn
	65e0fb44dec2a       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387   About a minute ago   Running             kindnet-cni               0                   8092d681a7fcf       kindnet-c7z7c
	7869c78851adc       505d571f5fd56                                                                              About a minute ago   Running             kube-proxy                0                   5bd6b76949e46       kube-proxy-tgw89
	194625c1d055f       9499c9960544e                                                                              About a minute ago   Running             kube-apiserver            0                   255a07694cdd3       kube-apiserver-multinode-071500
	dd6a29921aeb0       0486b6c53a1b5                                                                              About a minute ago   Running             kube-controller-manager   0                   e5a544d2ba02d       kube-controller-manager-multinode-071500
	e4b9f1d00646c       2e96e5913fc06                                                                              About a minute ago   Running             etcd                      0                   3741c4710b9e1       etcd-multinode-071500
	2f85a96248571       847c7bc1a5418                                                                              About a minute ago   Running             kube-scheduler            0                   5bc248af891ae       kube-scheduler-multinode-071500
	
	
	==> coredns [d9a4ff27464a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f3f03e6599ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               multinode-071500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-071500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=multinode-071500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_37_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:37:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-071500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:39:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:38:01 +0000   Mon, 28 Oct 2024 12:37:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.244.98
	  Hostname:    multinode-071500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 185d9c6ed47f4a5096378807b9fa20dc
	  System UUID:                01909705-6ec2-2e4c-a584-38b558b009f0
	  Boot ID:                    68ba9dca-d12b-4823-946f-4b1508951028
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-j8vdn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 coredns-7c65d6cfc9-w5gxr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-multinode-071500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         111s
	  kube-system                 kindnet-c7z7c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      106s
	  kube-system                 kube-apiserver-multinode-071500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-multinode-071500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-tgw89                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-multinode-071500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node multinode-071500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node multinode-071500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node multinode-071500 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node multinode-071500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node multinode-071500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node multinode-071500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           106s                 node-controller  Node multinode-071500 event: Registered Node multinode-071500 in Controller
	  Normal  NodeReady                83s                  kubelet          Node multinode-071500 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.909337] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 12:36] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.218831] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[ +27.387805] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.118489] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.568429] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	[  +0.231942] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.243355] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +2.893672] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.207419] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.208207] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.281567] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[Oct28 12:37] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.115357] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.159710] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +7.389431] systemd-fstab-generator[1830]: Ignoring "noauto" option for root device
	[  +0.127450] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.575684] systemd-fstab-generator[2240]: Ignoring "noauto" option for root device
	[  +0.168990] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.191449] systemd-fstab-generator[2346]: Ignoring "noauto" option for root device
	[  +0.118019] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.381667] hrtimer: interrupt took 2384119 ns
	[  +0.687538] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [e4b9f1d00646] <==
	{"level":"info","ts":"2024-10-28T12:37:24.491535Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9ff7ddd6e6d528cf","initial-advertise-peer-urls":["https://172.27.244.98:2380"],"listen-peer-urls":["https://172.27.244.98:2380"],"advertise-client-urls":["https://172.27.244.98:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.244.98:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:37:24.491608Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:37:25.037915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T12:37:25.038166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T12:37:25.038389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf received MsgPreVoteResp from 9ff7ddd6e6d528cf at term 1"}
	{"level":"info","ts":"2024-10-28T12:37:25.038589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.040847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf received MsgVoteResp from 9ff7ddd6e6d528cf at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.041057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ff7ddd6e6d528cf became leader at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.041267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ff7ddd6e6d528cf elected leader 9ff7ddd6e6d528cf at term 2"}
	{"level":"info","ts":"2024-10-28T12:37:25.048049Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.055113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9ff7ddd6e6d528cf","local-member-attributes":"{Name:multinode-071500 ClientURLs:[https://172.27.244.98:2379]}","request-path":"/0/members/9ff7ddd6e6d528cf/attributes","cluster-id":"9b78066306349c95","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:37:25.055172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:37:25.055713Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:37:25.058926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:37:25.059121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:37:25.059355Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9b78066306349c95","local-member-id":"9ff7ddd6e6d528cf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.059793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.060079Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:37:25.061757Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:37:25.066036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.244.98:2379"}
	{"level":"info","ts":"2024-10-28T12:37:25.066377Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:37:25.067646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:37:39.319261Z","caller":"traceutil/trace.go:171","msg":"trace[2019660784] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"110.373943ms","start":"2024-10-28T12:37:39.208864Z","end":"2024-10-28T12:37:39.319238Z","steps":["trace[2019660784] 'process raft request'  (duration: 109.765438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:37:44.149664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.822975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-071500\" ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2024-10-28T12:37:44.149771Z","caller":"traceutil/trace.go:171","msg":"trace[1360376525] range","detail":"{range_begin:/registry/minions/multinode-071500; range_end:; response_count:1; response_revision:392; }","duration":"214.010677ms","start":"2024-10-28T12:37:43.935737Z","end":"2024-10-28T12:37:44.149747Z","steps":["trace[1360376525] 'range keys from in-memory index tree'  (duration: 213.725475ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:39:22 up 4 min,  0 users,  load average: 0.26, 0.29, 0.12
	Linux multinode-071500 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65e0fb44dec2] <==
	I1028 12:37:45.151435       1 main.go:148] setting mtu 1500 for CNI 
	I1028 12:37:45.151636       1 main.go:178] kindnetd IP family: "ipv4"
	I1028 12:37:45.151781       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1028 12:37:46.148296       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I1028 12:37:56.157729       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:37:56.158083       1 main.go:300] handling current node
	I1028 12:38:06.150178       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:06.150240       1 main.go:300] handling current node
	I1028 12:38:16.160005       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:16.160193       1 main.go:300] handling current node
	I1028 12:38:26.157967       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:26.158161       1 main.go:300] handling current node
	I1028 12:38:36.150303       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:36.150535       1 main.go:300] handling current node
	I1028 12:38:46.149146       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:46.149212       1 main.go:300] handling current node
	I1028 12:38:56.150435       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:38:56.150493       1 main.go:300] handling current node
	I1028 12:39:06.157624       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:39:06.158216       1 main.go:300] handling current node
	I1028 12:39:16.149489       1 main.go:296] Handling node with IPs: map[172.27.244.98:{}]
	I1028 12:39:16.149551       1 main.go:300] handling current node
	
	
	==> kube-apiserver [194625c1d055] <==
	I1028 12:37:27.409594       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 12:37:27.409958       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 12:37:27.409999       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1028 12:37:27.416609       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1028 12:37:27.416672       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1028 12:37:27.417145       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 12:37:27.418605       1 policy_source.go:224] refreshing policies
	I1028 12:37:27.456538       1 controller.go:615] quota admission added evaluator for: namespaces
	E1028 12:37:27.518964       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1028 12:37:27.626496       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 12:37:28.218679       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1028 12:37:28.228344       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1028 12:37:28.228380       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 12:37:29.470521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 12:37:29.570087       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 12:37:29.744451       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1028 12:37:29.786026       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.244.98]
	I1028 12:37:29.787419       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 12:37:29.814014       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 12:37:30.298577       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 12:37:30.552293       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 12:37:30.605996       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 12:37:30.658631       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 12:37:35.697225       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 12:37:35.998791       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [dd6a29921aeb] <==
	I1028 12:37:35.248080       1 shared_informer.go:320] Caches are synced for disruption
	I1028 12:37:35.251696       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:37:35.261782       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:37:35.299486       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 12:37:35.726720       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:37:35.746311       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:37:35.746354       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 12:37:35.949740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:37:36.330028       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="319.843045ms"
	I1028 12:37:36.354703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="24.458863ms"
	I1028 12:37:36.355505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.401µs"
	I1028 12:37:58.618694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:37:58.635443       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:37:58.658482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.451007ms"
	I1028 12:37:58.662895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.3µs"
	I1028 12:37:58.694729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="314.501µs"
	I1028 12:37:58.733963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="180.401µs"
	I1028 12:38:00.049108       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1028 12:38:00.331093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.2µs"
	I1028 12:38:00.394237       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.401µs"
	I1028 12:38:01.394724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.600616ms"
	I1028 12:38:01.395429       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="115.901µs"
	I1028 12:38:01.403025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-071500"
	I1028 12:38:01.490155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.97842ms"
	I1028 12:38:01.490306       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.7µs"
	
	
	==> kube-proxy [7869c78851ad] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:37:37.400456       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:37:37.425871       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.27.244.98"]
	E1028 12:37:37.426380       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:37:37.529412       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:37:37.529546       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:37:37.529582       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:37:37.534266       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:37:37.535064       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:37:37.535866       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:37:37.538339       1 config.go:199] "Starting service config controller"
	I1028 12:37:37.538550       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:37:37.538918       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:37:37.539135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:37:37.542601       1 config.go:328] "Starting node config controller"
	I1028 12:37:37.542941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:37:37.559096       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:37:37.639490       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:37:37.639595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2f85a9624857] <==
	W1028 12:37:28.496466       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:37:28.498488       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 12:37:28.522186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:37:28.522543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.552203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:37:28.552241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.580607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:37:28.580729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.680181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:37:28.682510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.711501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 12:37:28.711878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.712889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 12:37:28.713975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.755348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:37:28.755415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.821039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 12:37:28.821478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.906662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:37:28.907605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:28.932186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:37:28.932240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:37:29.010168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:37:29.010492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1028 12:37:31.101434       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.791926    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9151b032-96d2-40e4-b4e6-6bac4ccb5180-lib-modules\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792006    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe651213-d8ad-43ae-b151-dd8ad6cd1e8d-lib-modules\") pod \"kube-proxy-tgw89\" (UID: \"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d\") " pod="kube-system/kube-proxy-tgw89"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792106    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9151b032-96d2-40e4-b4e6-6bac4ccb5180-cni-cfg\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792197    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9151b032-96d2-40e4-b4e6-6bac4ccb5180-xtables-lock\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792291    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnsz\" (UniqueName: \"kubernetes.io/projected/9151b032-96d2-40e4-b4e6-6bac4ccb5180-kube-api-access-cdnsz\") pod \"kindnet-c7z7c\" (UID: \"9151b032-96d2-40e4-b4e6-6bac4ccb5180\") " pod="kube-system/kindnet-c7z7c"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.792396    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2694\" (UniqueName: \"kubernetes.io/projected/fe651213-d8ad-43ae-b151-dd8ad6cd1e8d-kube-api-access-s2694\") pod \"kube-proxy-tgw89\" (UID: \"fe651213-d8ad-43ae-b151-dd8ad6cd1e8d\") " pod="kube-system/kube-proxy-tgw89"
	Oct 28 12:37:35 multinode-071500 kubelet[2247]: I1028 12:37:35.938541    2247 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 28 12:37:37 multinode-071500 kubelet[2247]: I1028 12:37:37.394913    2247 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8092d681a7fcf95fd4abec34e0c8aae511804c8a70366790b3a66de9aba99cd7"
	Oct 28 12:37:37 multinode-071500 kubelet[2247]: I1028 12:37:37.505463    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tgw89" podStartSLOduration=2.505429779 podStartE2EDuration="2.505429779s" podCreationTimestamp="2024-10-28 12:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:37:37.460960121 +0000 UTC m=+7.020771401" watchObservedRunningTime="2024-10-28 12:37:37.505429779 +0000 UTC m=+7.065241059"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.596806    2247 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.652971    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c7z7c" podStartSLOduration=17.178162282 podStartE2EDuration="23.652951227s" podCreationTimestamp="2024-10-28 12:37:35 +0000 UTC" firstStartedPulling="2024-10-28 12:37:37.405135046 +0000 UTC m=+6.964946226" lastFinishedPulling="2024-10-28 12:37:43.879923991 +0000 UTC m=+13.439735171" observedRunningTime="2024-10-28 12:37:45.851499758 +0000 UTC m=+15.411311038" watchObservedRunningTime="2024-10-28 12:37:58.652951227 +0000 UTC m=+28.212762407"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796463    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852-config-volume\") pod \"coredns-7c65d6cfc9-w5gxr\" (UID: \"2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852\") " pod="kube-system/coredns-7c65d6cfc9-w5gxr"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796621    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72f8f3d0-e08c-44f1-8f74-6f5685c5bf75-config-volume\") pod \"coredns-7c65d6cfc9-j8vdn\" (UID: \"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75\") " pod="kube-system/coredns-7c65d6cfc9-j8vdn"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796652    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmhf\" (UniqueName: \"kubernetes.io/projected/72f8f3d0-e08c-44f1-8f74-6f5685c5bf75-kube-api-access-svmhf\") pod \"coredns-7c65d6cfc9-j8vdn\" (UID: \"72f8f3d0-e08c-44f1-8f74-6f5685c5bf75\") " pod="kube-system/coredns-7c65d6cfc9-j8vdn"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796689    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7sk5\" (UniqueName: \"kubernetes.io/projected/2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852-kube-api-access-j7sk5\") pod \"coredns-7c65d6cfc9-w5gxr\" (UID: \"2c6bd789-f3d5-4c3c-9f69-6e5dbe1db852\") " pod="kube-system/coredns-7c65d6cfc9-w5gxr"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796720    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3041ff50-d6af-4c68-803f-78a36f22c000-tmp\") pod \"storage-provisioner\" (UID: \"3041ff50-d6af-4c68-803f-78a36f22c000\") " pod="kube-system/storage-provisioner"
	Oct 28 12:37:58 multinode-071500 kubelet[2247]: I1028 12:37:58.796744    2247 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrsx\" (UniqueName: \"kubernetes.io/projected/3041ff50-d6af-4c68-803f-78a36f22c000-kube-api-access-hqrsx\") pod \"storage-provisioner\" (UID: \"3041ff50-d6af-4c68-803f-78a36f22c000\") " pod="kube-system/storage-provisioner"
	Oct 28 12:38:00 multinode-071500 kubelet[2247]: I1028 12:38:00.391928    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j8vdn" podStartSLOduration=24.391903218 podStartE2EDuration="24.391903218s" podCreationTimestamp="2024-10-28 12:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:38:00.390000399 +0000 UTC m=+29.949811679" watchObservedRunningTime="2024-10-28 12:38:00.391903218 +0000 UTC m=+29.951714498"
	Oct 28 12:38:00 multinode-071500 kubelet[2247]: I1028 12:38:00.392210    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w5gxr" podStartSLOduration=24.392199321 podStartE2EDuration="24.392199321s" podCreationTimestamp="2024-10-28 12:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:38:00.338172679 +0000 UTC m=+29.897983959" watchObservedRunningTime="2024-10-28 12:38:00.392199321 +0000 UTC m=+29.952010601"
	Oct 28 12:38:01 multinode-071500 kubelet[2247]: I1028 12:38:01.446619    2247 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.44659458 podStartE2EDuration="17.44659458s" podCreationTimestamp="2024-10-28 12:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 12:38:01.415678471 +0000 UTC m=+30.975489651" watchObservedRunningTime="2024-10-28 12:38:01.44659458 +0000 UTC m=+31.006405760"
	Oct 28 12:38:30 multinode-071500 kubelet[2247]: E1028 12:38:30.799128    2247 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:38:30 multinode-071500 kubelet[2247]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:38:30 multinode-071500 kubelet[2247]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:38:30 multinode-071500 kubelet[2247]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:38:30 multinode-071500 kubelet[2247]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [68b45017566f] <==
	I1028 12:38:00.241425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:38:00.376991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:38:00.377366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:38:00.399349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:38:00.399519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-071500_88ce8c2c-61c9-4092-b5bf-02d51fbf660c!
	I1028 12:38:00.401349       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ccdd4c9-5c6b-4ada-bb8b-34eeb18cb932", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-071500_88ce8c2c-61c9-4092-b5bf-02d51fbf660c became leader
	I1028 12:38:00.500704       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-071500_88ce8c2c-61c9-4092-b5bf-02d51fbf660c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-071500 -n multinode-071500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-071500 -n multinode-071500: (12.8509084s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-071500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (56.75s)
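For reference, the post-mortem step at helpers_test.go:261 above shells out to kubectl to list every pod whose phase is not Running. Below is a minimal Go sketch of that kind of check, assuming kubectl is on PATH; the function name and error handling are illustrative, not the actual helpers_test.go implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// nonRunningPods lists pod names in any namespace whose phase is not Running,
	// using the same kubectl invocation shown in the log above (helpers_test.go:261).
	func nonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		pods, err := nonRunningPods("multinode-071500")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("non-Running pods:", pods)
	}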

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (48.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 stop
E1028 12:39:45.576400    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-071500 stop: (41.0074678s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status: exit status 7 (2.6351949s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr: exit status 7 (2.6185278s)

                                                
                                                
-- stdout --
	multinode-071500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:40:20.157820    9508 out.go:345] Setting OutFile to fd 1808 ...
	I1028 12:40:20.284295    9508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:40:20.284295    9508 out.go:358] Setting ErrFile to fd 1584...
	I1028 12:40:20.284295    9508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:40:20.304326    9508 out.go:352] Setting JSON to false
	I1028 12:40:20.304326    9508 mustload.go:65] Loading cluster: multinode-071500
	I1028 12:40:20.304326    9508 notify.go:220] Checking for updates...
	I1028 12:40:20.304326    9508 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:40:20.304326    9508 status.go:174] checking status of multinode-071500 ...
	I1028 12:40:20.304326    9508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:40:22.607592    9508 main.go:141] libmachine: [stdout =====>] : Off
	
	I1028 12:40:22.608527    9508 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:22.608527    9508 status.go:371] multinode-071500 host status = "Stopped" (err=<nil>)
	I1028 12:40:22.608527    9508 status.go:384] host is not running, skipping remaining checks
	I1028 12:40:22.608645    9508 status.go:176] multinode-071500 status: &{Name:multinode-071500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
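The stderr trace above shows how the hyperv driver resolves the host state: it runs PowerShell's Hyper-V\Get-VM for the profile VM, reads the returned state ("Off" here), and reports it as "Stopped" (status.go:371). The Go sketch below mirrors the logged command only as an illustration; it assumes powershell.exe is on PATH and is not the actual libmachine implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// vmState queries Hyper-V for a VM's state the same way the trace above shows,
	// returning the raw state string (e.g. "Off", "Running").
	func vmState(vmName string) (string, error) {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		out, err := cmd.Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		state, err := vmState("multinode-071500")
		if err != nil {
			fmt.Println("Get-VM failed:", err)
			return
		}
		// minikube reports "Stopped" when Hyper-V reports "Off".
		fmt.Println("VM state:", state)
	}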
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr": multinode-071500
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-071500 status --alsologtostderr": multinode-071500
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 7 (2.5634731s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (48.83s)
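The failures at multinode_test.go:364 and multinode_test.go:368 report an incorrect number of stopped hosts and kubelets because the status output above lists only the single remaining control-plane node. Below is a minimal Go sketch of that kind of count check, assuming an expected node count of 2; the helper and constant are illustrative, not the actual multinode_test.go code.

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// countStopped counts how many nodes in a `minikube status` dump report the
	// given field as Stopped.
	func countStopped(statusOutput, field string) int {
		return strings.Count(statusOutput, field+": Stopped")
	}
	
	func main() {
		// Status output as printed above: only the control-plane node is listed.
		out := "multinode-071500\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		const expectedNodes = 2 // assumption for illustration; the real test derives this from the profile
	
		if got := countStopped(out, "host"); got != expectedNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, expectedNodes)
		}
		if got := countStopped(out, "kubelet"); got != expectedNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, expectedNodes)
		}
	}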

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (194.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true -v=8 --alsologtostderr --driver=hyperv
E1028 12:41:22.819215    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:41:39.730653    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true -v=8 --alsologtostderr --driver=hyperv: exit status 90 (3m2.0191593s)

                                                
                                                
-- stdout --
	* [multinode-071500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-071500" primary control-plane node in "multinode-071500" cluster
	* Restarting existing hyperv VM for "multinode-071500" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:40:25.330648   12604 out.go:345] Setting OutFile to fd 2008 ...
	I1028 12:40:25.413234   12604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:40:25.413380   12604 out.go:358] Setting ErrFile to fd 1704...
	I1028 12:40:25.413380   12604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:40:25.436751   12604 out.go:352] Setting JSON to false
	I1028 12:40:25.442113   12604 start.go:129] hostinfo: {"hostname":"minikube6","uptime":167050,"bootTime":1729952174,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 12:40:25.442113   12604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 12:40:25.448296   12604 out.go:177] * [multinode-071500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 12:40:25.450763   12604 notify.go:220] Checking for updates...
	I1028 12:40:25.452849   12604 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 12:40:25.455399   12604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:40:25.458332   12604 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 12:40:25.460555   12604 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:40:25.463768   12604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:40:25.467462   12604 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:40:25.469150   12604 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:40:31.201463   12604 out.go:177] * Using the hyperv driver based on existing profile
	I1028 12:40:31.205504   12604 start.go:297] selected driver: hyperv
	I1028 12:40:31.205504   12604 start.go:901] validating driver "hyperv" against &{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:40:31.206176   12604 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:40:31.261111   12604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:40:31.261111   12604 cni.go:84] Creating CNI manager for ""
	I1028 12:40:31.261111   12604 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 12:40:31.261707   12604 start.go:340] cluster config:
	{Name:multinode-071500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-071500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.244.98 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:40:31.261707   12604 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:40:31.266917   12604 out.go:177] * Starting "multinode-071500" primary control-plane node in "multinode-071500" cluster
	I1028 12:40:31.269054   12604 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:40:31.269612   12604 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 12:40:31.269612   12604 cache.go:56] Caching tarball of preloaded images
	I1028 12:40:31.269612   12604 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 12:40:31.270281   12604 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 12:40:31.270469   12604 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:40:31.273212   12604 start.go:360] acquireMachinesLock for multinode-071500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:40:31.273212   12604 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-071500"
	I1028 12:40:31.273212   12604 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:40:31.273212   12604 fix.go:54] fixHost starting: 
	I1028 12:40:31.274252   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:40:34.035041   12604 main.go:141] libmachine: [stdout =====>] : Off
	
	I1028 12:40:34.035041   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:34.035041   12604 fix.go:112] recreateIfNeeded on multinode-071500: state=Stopped err=<nil>
	W1028 12:40:34.035041   12604 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:40:34.038071   12604 out.go:177] * Restarting existing hyperv VM for "multinode-071500" ...
	I1028 12:40:34.040327   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-071500
	I1028 12:40:37.327680   12604 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:40:37.327680   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:37.327680   12604 main.go:141] libmachine: Waiting for host to start...
	I1028 12:40:37.327680   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:40:39.670198   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:40:39.670198   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:39.670198   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:40:42.274725   12604 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:40:42.274893   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:43.275875   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:40:45.587956   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:40:45.588225   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:45.588335   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:40:48.224057   12604 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:40:48.225090   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:49.225536   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:40:51.509245   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:40:51.510240   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:51.510240   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:40:54.087732   12604 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:40:54.087732   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:55.088472   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:40:57.379523   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:40:57.380586   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:40:57.380586   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:40:59.979827   12604 main.go:141] libmachine: [stdout =====>] : 
	I1028 12:40:59.979827   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:00.980852   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:03.262290   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:03.262290   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:03.262290   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:05.938657   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:05.938657   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:05.943166   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:08.184596   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:08.184755   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:08.184755   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:10.817989   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:10.818896   12604 main.go:141] libmachine: [stderr =====>] : 
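	[editor note] The "Waiting for host to start..." block above is the hyperv driver's start-up polling: it repeatedly asks PowerShell for the VM's state and then for the first IP address of the first network adapter, sleeping roughly a second between attempts until 172.27.246.244 is reported. The standalone Go sketch below reproduces that pattern under stated assumptions; the two PowerShell expressions are copied verbatim from the log, while the function names, timeout, and error handling are illustrative only and are not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runPS is a hypothetical helper: run one PowerShell expression and return
	// its trimmed stdout, mirroring the "[executing ==>]" lines in the log.
	func runPS(expr string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP polls the VM state and then the first NIC's first IP address,
	// returning once an address is reported or the timeout elapses.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
			if err != nil {
				return "", err
			}
			if state == "Running" {
				ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
				if err != nil {
					return "", err
				}
				if ip != "" {
					return ip, nil // e.g. 172.27.246.244 in the run above
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for an IP on %q", vm)
	}

	func main() {
		ip, err := waitForIP("multinode-071500", 3*time.Minute)
		fmt.Println(ip, err)
	}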
	I1028 12:41:10.819191   12604 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-071500\config.json ...
	I1028 12:41:10.821816   12604 machine.go:93] provisionDockerMachine start ...
	I1028 12:41:10.821894   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:13.034404   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:13.034459   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:13.034459   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:15.661247   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:15.661247   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:15.667554   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:41:15.668315   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:41:15.668315   12604 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:41:15.790599   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:41:15.790599   12604 buildroot.go:166] provisioning hostname "multinode-071500"
	I1028 12:41:15.790599   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:18.025081   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:18.025153   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:18.025299   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:20.684021   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:20.684243   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:20.689640   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:41:20.690560   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:41:20.690560   12604 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-071500 && echo "multinode-071500" | sudo tee /etc/hostname
	I1028 12:41:20.836279   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-071500
	
	I1028 12:41:20.836419   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:23.070784   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:23.070858   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:23.070858   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:25.693376   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:25.693850   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:25.699537   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:41:25.700120   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:41:25.700120   12604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-071500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-071500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-071500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:41:25.841452   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:41:25.841452   12604 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I1028 12:41:25.841452   12604 buildroot.go:174] setting up certificates
	I1028 12:41:25.841452   12604 provision.go:84] configureAuth start
	I1028 12:41:25.841452   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:28.103728   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:28.103728   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:28.103728   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:30.743706   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:30.743760   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:30.743760   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:32.947856   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:32.948862   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:32.949047   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:35.650566   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:35.650566   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:35.650566   12604 provision.go:143] copyHostCerts
	I1028 12:41:35.650566   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I1028 12:41:35.651196   12604 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I1028 12:41:35.651196   12604 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I1028 12:41:35.651264   12604 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 12:41:35.652467   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I1028 12:41:35.653084   12604 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I1028 12:41:35.653084   12604 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I1028 12:41:35.653084   12604 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I1028 12:41:35.654442   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I1028 12:41:35.654700   12604 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I1028 12:41:35.654700   12604 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I1028 12:41:35.655129   12604 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 12:41:35.655992   12604 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-071500 san=[127.0.0.1 172.27.246.244 localhost minikube multinode-071500]
	I1028 12:41:35.892373   12604 provision.go:177] copyRemoteCerts
	I1028 12:41:35.904619   12604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:41:35.904619   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:38.154914   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:38.154914   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:38.155555   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:40.777074   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:40.777074   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:40.777293   12604 sshutil.go:53] new ssh client: &{IP:172.27.246.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:41:40.886339   12604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9816628s)
	I1028 12:41:40.886461   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 12:41:40.887029   12604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:41:40.936595   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 12:41:40.937163   12604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 12:41:40.985346   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 12:41:40.985490   12604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:41:41.034710   12604 provision.go:87] duration metric: took 15.1930856s to configureAuth
	I1028 12:41:41.034710   12604 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:41:41.034710   12604 config.go:182] Loaded profile config "multinode-071500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:41:41.034710   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:43.286964   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:43.286993   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:43.287112   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:45.914593   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:45.914726   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:45.920430   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:41:45.921065   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:41:45.921065   12604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 12:41:46.049646   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 12:41:46.049646   12604 buildroot.go:70] root file system type: tmpfs
	I1028 12:41:46.049646   12604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 12:41:46.049646   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:48.288408   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:48.288408   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:48.288408   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:50.954573   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:50.954573   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:50.961580   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:41:50.962120   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:41:50.962253   12604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 12:41:51.114827   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 12:41:51.114827   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:41:53.343668   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:41:53.344376   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:53.344376   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:41:55.996154   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:41:55.996154   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:41:56.002291   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:41:56.002872   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:41:56.002872   12604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 12:41:58.555876   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 12:41:58.555941   12604 machine.go:96] duration metric: took 47.7335832s to provisionDockerMachine
	I1028 12:41:58.556004   12604 start.go:293] postStartSetup for "multinode-071500" (driver="hyperv")
	I1028 12:41:58.556004   12604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:41:58.566474   12604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:41:58.566474   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:42:00.783767   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:42:00.783767   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:00.784546   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:42:03.469045   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:42:03.469045   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:03.469591   12604 sshutil.go:53] new ssh client: &{IP:172.27.246.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:42:03.576651   12604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0099583s)
	I1028 12:42:03.588934   12604 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:42:03.595485   12604 command_runner.go:130] > NAME=Buildroot
	I1028 12:42:03.595485   12604 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 12:42:03.595485   12604 command_runner.go:130] > ID=buildroot
	I1028 12:42:03.595485   12604 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 12:42:03.595485   12604 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 12:42:03.595630   12604 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:42:03.595671   12604 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I1028 12:42:03.595784   12604 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I1028 12:42:03.596564   12604 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> 96082.pem in /etc/ssl/certs
	I1028 12:42:03.596564   12604 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem -> /etc/ssl/certs/96082.pem
	I1028 12:42:03.609731   12604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:42:03.628182   12604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96082.pem --> /etc/ssl/certs/96082.pem (1708 bytes)
	I1028 12:42:03.676120   12604 start.go:296] duration metric: took 5.120058s for postStartSetup
	I1028 12:42:03.676120   12604 fix.go:56] duration metric: took 1m32.4018604s for fixHost
	I1028 12:42:03.676396   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:42:05.966051   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:42:05.966051   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:05.966715   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:42:08.656288   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:42:08.656288   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:08.661650   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:42:08.662140   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:42:08.662219   12604 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:42:08.794058   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730119328.809205142
	
	I1028 12:42:08.794153   12604 fix.go:216] guest clock: 1730119328.809205142
	I1028 12:42:08.794153   12604 fix.go:229] Guest: 2024-10-28 12:42:08.809205142 +0000 UTC Remote: 2024-10-28 12:42:03.6761207 +0000 UTC m=+98.447569501 (delta=5.133084442s)
	I1028 12:42:08.794318   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:42:11.040877   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:42:11.040877   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:11.041608   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:42:13.716044   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:42:13.716044   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:13.723562   12604 main.go:141] libmachine: Using SSH client type: native
	I1028 12:42:13.723769   12604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe23340] 0xe25e80 <nil>  [] 0s} 172.27.246.244 22 <nil> <nil>}
	I1028 12:42:13.723769   12604 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1730119328
	I1028 12:42:13.858109   12604 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 28 12:42:08 UTC 2024
	
	I1028 12:42:13.858232   12604 fix.go:236] clock set: Mon Oct 28 12:42:08 UTC 2024
	 (err=<nil>)
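	[editor note] The clock-skew figures above can be checked from the log itself: the guest reported 1730119328.809205142 via "date +%s.%N", the host-side "Remote" timestamp was 2024-10-28 12:42:03.6761207 UTC, and the guest clock is then reset with "sudo date -s @1730119328". A minimal Go snippet, using only the values already printed in the log, reproduces the logged delta:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest clock as reported by "date +%s.%N" in the log above.
		guest := time.Unix(1730119328, 809205142).UTC()
		// Host-side "Remote" timestamp from the same log line.
		remote := time.Date(2024, 10, 28, 12, 42, 3, 676120700, time.UTC)
		// Prints 5.133084442s, matching "delta=5.133084442s" in the log.
		fmt.Println(guest.Sub(remote))
	}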
	I1028 12:42:13.858232   12604 start.go:83] releasing machines lock for "multinode-071500", held for 1m42.5838573s
	I1028 12:42:13.858455   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:42:16.174815   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:42:16.174815   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:16.175884   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:42:18.856375   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:42:18.856375   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:18.861260   12604 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 12:42:18.861260   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:42:18.870657   12604 ssh_runner.go:195] Run: cat /version.json
	I1028 12:42:18.870657   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-071500 ).state
	I1028 12:42:21.187443   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:42:21.187872   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:21.187956   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:42:21.197181   12604 main.go:141] libmachine: [stdout =====>] : Running
	
	I1028 12:42:21.197181   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:21.197767   12604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-071500 ).networkadapters[0]).ipaddresses[0]
	I1028 12:42:23.963532   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:42:23.963532   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:23.964692   12604 sshutil.go:53] new ssh client: &{IP:172.27.246.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:42:23.987878   12604 main.go:141] libmachine: [stdout =====>] : 172.27.246.244
	
	I1028 12:42:23.988084   12604 main.go:141] libmachine: [stderr =====>] : 
	I1028 12:42:23.988848   12604 sshutil.go:53] new ssh client: &{IP:172.27.246.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-071500\id_rsa Username:docker}
	I1028 12:42:24.065036   12604 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 12:42:24.065036   12604 ssh_runner.go:235] Completed: cat /version.json: (5.1943199s)
	I1028 12:42:24.077661   12604 ssh_runner.go:195] Run: systemctl --version
	I1028 12:42:24.078195   12604 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1028 12:42:24.078835   12604 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2175163s)
	W1028 12:42:24.078835   12604 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 12:42:24.086902   12604 command_runner.go:130] > systemd 252 (252)
	I1028 12:42:24.086902   12604 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 12:42:24.099121   12604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:42:24.110107   12604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 12:42:24.111152   12604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:42:24.121219   12604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:42:24.161447   12604 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1028 12:42:24.161527   12604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:42:24.161527   12604 start.go:495] detecting cgroup driver to use...
	I1028 12:42:24.161527   12604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1028 12:42:24.181983   12604 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1028 12:42:24.181983   12604 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 12:42:24.207837   12604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1028 12:42:24.220786   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 12:42:24.255199   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 12:42:24.276470   12604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 12:42:24.290140   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 12:42:24.322218   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:42:24.353697   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 12:42:24.384704   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:42:24.418487   12604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:42:24.450794   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 12:42:24.484377   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 12:42:24.516399   12604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 12:42:24.548635   12604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:42:24.567732   12604 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:42:24.567732   12604 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:42:24.580015   12604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:42:24.612451   12604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:42:24.640522   12604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:42:24.847440   12604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 12:42:24.879854   12604 start.go:495] detecting cgroup driver to use...
	I1028 12:42:24.892776   12604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 12:42:24.918777   12604 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1028 12:42:24.918777   12604 command_runner.go:130] > [Unit]
	I1028 12:42:24.918777   12604 command_runner.go:130] > Description=Docker Application Container Engine
	I1028 12:42:24.918777   12604 command_runner.go:130] > Documentation=https://docs.docker.com
	I1028 12:42:24.918777   12604 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1028 12:42:24.918777   12604 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1028 12:42:24.918777   12604 command_runner.go:130] > StartLimitBurst=3
	I1028 12:42:24.918777   12604 command_runner.go:130] > StartLimitIntervalSec=60
	I1028 12:42:24.918777   12604 command_runner.go:130] > [Service]
	I1028 12:42:24.918777   12604 command_runner.go:130] > Type=notify
	I1028 12:42:24.918777   12604 command_runner.go:130] > Restart=on-failure
	I1028 12:42:24.918777   12604 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1028 12:42:24.918777   12604 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1028 12:42:24.918777   12604 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1028 12:42:24.918777   12604 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1028 12:42:24.918777   12604 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1028 12:42:24.918777   12604 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1028 12:42:24.918777   12604 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1028 12:42:24.918777   12604 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1028 12:42:24.918777   12604 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1028 12:42:24.918777   12604 command_runner.go:130] > ExecStart=
	I1028 12:42:24.918777   12604 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1028 12:42:24.918777   12604 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1028 12:42:24.919320   12604 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1028 12:42:24.919369   12604 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1028 12:42:24.919369   12604 command_runner.go:130] > LimitNOFILE=infinity
	I1028 12:42:24.919369   12604 command_runner.go:130] > LimitNPROC=infinity
	I1028 12:42:24.919369   12604 command_runner.go:130] > LimitCORE=infinity
	I1028 12:42:24.919369   12604 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1028 12:42:24.919369   12604 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1028 12:42:24.919369   12604 command_runner.go:130] > TasksMax=infinity
	I1028 12:42:24.919435   12604 command_runner.go:130] > TimeoutStartSec=0
	I1028 12:42:24.919435   12604 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1028 12:42:24.919435   12604 command_runner.go:130] > Delegate=yes
	I1028 12:42:24.919478   12604 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1028 12:42:24.919478   12604 command_runner.go:130] > KillMode=process
	I1028 12:42:24.919478   12604 command_runner.go:130] > [Install]
	I1028 12:42:24.919544   12604 command_runner.go:130] > WantedBy=multi-user.target
	I1028 12:42:24.931714   12604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:42:24.967821   12604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:42:25.016381   12604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:42:25.056079   12604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:42:25.094996   12604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 12:42:25.159825   12604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:42:25.186314   12604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:42:25.220548   12604 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1028 12:42:25.235094   12604 ssh_runner.go:195] Run: which cri-dockerd
	I1028 12:42:25.241464   12604 command_runner.go:130] > /usr/bin/cri-dockerd
	I1028 12:42:25.254119   12604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 12:42:25.272536   12604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 12:42:25.319366   12604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 12:42:25.538591   12604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 12:42:25.739386   12604 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 12:42:25.739386   12604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 12:42:25.783098   12604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:42:25.993634   12604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:43:27.113533   12604 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1028 12:43:27.113588   12604 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I1028 12:43:27.113676   12604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.11926s)
	I1028 12:43:27.126451   12604 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1028 12:43:27.155131   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	I1028 12:43:27.155186   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[644]: time="2024-10-28T12:41:56.628858702Z" level=info msg="Starting up"
	I1028 12:43:27.155186   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[644]: time="2024-10-28T12:41:56.631454007Z" level=info msg="containerd not running, starting managed containerd"
	I1028 12:43:27.155186   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[644]: time="2024-10-28T12:41:56.632843863Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=650
	I1028 12:43:27.155186   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.672469759Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I1028 12:43:27.155282   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704312143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1028 12:43:27.155316   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704439248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1028 12:43:27.155367   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704540552Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1028 12:43:27.155367   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704623055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155422   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.705268681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:43:27.155457   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.705370685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155499   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.705920807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706053413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706078114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706093114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706700239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.707796283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.712491472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.712666079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.712896689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.713001893Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.713767024Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.713826726Z" level=info msg="metadata content store policy set" policy=shared
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.719841868Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.719959273Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.719986774Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720014175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720042276Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720122780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720478394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720707503Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1028 12:43:27.155586   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720862910Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1028 12:43:27.156221   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720887711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1028 12:43:27.156221   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720913812Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156292   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720932312Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156292   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720947213Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156292   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720980614Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156467   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720997915Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156467   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721017216Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156554   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721032616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156634   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721046117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1028 12:43:27.156677   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721072318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156705   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721094819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721112420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721127520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721142221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721157221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721182122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721205723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721222724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721257325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721296527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721312828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721327228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721376130Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721420632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721436033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721450533Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1028 12:43:27.156731   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721591639Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1028 12:43:27.157283   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721652141Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1028 12:43:27.157335   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721668142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1028 12:43:27.157335   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721682543Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1028 12:43:27.157395   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721695743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1028 12:43:27.157429   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721716444Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1028 12:43:27.157476   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721863550Z" level=info msg="NRI interface is disabled by configuration."
	I1028 12:43:27.157520   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.722838289Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1028 12:43:27.157520   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.722929693Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1028 12:43:27.157520   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.723012996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1028 12:43:27.157520   12604 command_runner.go:130] > Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.723044097Z" level=info msg="containerd successfully booted in 0.054172s"
	I1028 12:43:27.157627   12604 command_runner.go:130] > Oct 28 12:41:57 multinode-071500 dockerd[644]: time="2024-10-28T12:41:57.692157442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1028 12:43:27.157679   12604 command_runner.go:130] > Oct 28 12:41:57 multinode-071500 dockerd[644]: time="2024-10-28T12:41:57.929195297Z" level=info msg="Loading containers: start."
	I1028 12:43:27.157696   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.201496010Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1028 12:43:27.157810   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.362727428Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1028 12:43:27.157810   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.474601789Z" level=info msg="Loading containers: done."
	I1028 12:43:27.157810   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.502698834Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1028 12:43:27.157894   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.502992445Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1028 12:43:27.157894   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.503156751Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1028 12:43:27.157894   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.504007183Z" level=info msg="Daemon has completed initialization"
	I1028 12:43:27.157985   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.568663388Z" level=info msg="API listen on /var/run/docker.sock"
	I1028 12:43:27.158046   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 systemd[1]: Started Docker Application Container Engine.
	I1028 12:43:27.158111   12604 command_runner.go:130] > Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.569876133Z" level=info msg="API listen on [::]:2376"
	I1028 12:43:27.158111   12604 command_runner.go:130] > Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.037982702Z" level=info msg="Processing signal 'terminated'"
	I1028 12:43:27.158151   12604 command_runner.go:130] > Oct 28 12:42:26 multinode-071500 systemd[1]: Stopping Docker Application Container Engine...
	I1028 12:43:27.158151   12604 command_runner.go:130] > Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041340108Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1028 12:43:27.158151   12604 command_runner.go:130] > Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041546508Z" level=info msg="Daemon shutdown complete"
	I1028 12:43:27.158318   12604 command_runner.go:130] > Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041620508Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1028 12:43:27.158410   12604 command_runner.go:130] > Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041663608Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:42:27 multinode-071500 systemd[1]: docker.service: Deactivated successfully.
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:42:27 multinode-071500 systemd[1]: Stopped Docker Application Container Engine.
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:42:27 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:42:27 multinode-071500 dockerd[1082]: time="2024-10-28T12:42:27.099988190Z" level=info msg="Starting up"
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:43:27 multinode-071500 dockerd[1082]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:43:27 multinode-071500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:43:27 multinode-071500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1028 12:43:27.158437   12604 command_runner.go:130] > Oct 28 12:43:27 multinode-071500 systemd[1]: Failed to start Docker Application Container Engine.
	I1028 12:43:27.167813   12604 out.go:201] 
	W1028 12:43:27.170801   12604 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 28 12:41:56 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	Oct 28 12:41:56 multinode-071500 dockerd[644]: time="2024-10-28T12:41:56.628858702Z" level=info msg="Starting up"
	Oct 28 12:41:56 multinode-071500 dockerd[644]: time="2024-10-28T12:41:56.631454007Z" level=info msg="containerd not running, starting managed containerd"
	Oct 28 12:41:56 multinode-071500 dockerd[644]: time="2024-10-28T12:41:56.632843863Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=650
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.672469759Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704312143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704439248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704540552Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.704623055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.705268681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.705370685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.705920807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706053413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706078114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706093114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.706700239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.707796283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.712491472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.712666079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.712896689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.713001893Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.713767024Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.713826726Z" level=info msg="metadata content store policy set" policy=shared
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.719841868Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.719959273Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.719986774Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720014175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720042276Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720122780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720478394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720707503Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720862910Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720887711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720913812Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720932312Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720947213Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720980614Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.720997915Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721017216Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721032616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721046117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721072318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721094819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721112420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721127520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721142221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721157221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721182122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721205723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721222724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721257325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721296527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721312828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721327228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721376130Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721420632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721436033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721450533Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721591639Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721652141Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721668142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721682543Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721695743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721716444Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.721863550Z" level=info msg="NRI interface is disabled by configuration."
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.722838289Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.722929693Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.723012996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 28 12:41:56 multinode-071500 dockerd[650]: time="2024-10-28T12:41:56.723044097Z" level=info msg="containerd successfully booted in 0.054172s"
	Oct 28 12:41:57 multinode-071500 dockerd[644]: time="2024-10-28T12:41:57.692157442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 28 12:41:57 multinode-071500 dockerd[644]: time="2024-10-28T12:41:57.929195297Z" level=info msg="Loading containers: start."
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.201496010Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.362727428Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.474601789Z" level=info msg="Loading containers: done."
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.502698834Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.502992445Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.503156751Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.504007183Z" level=info msg="Daemon has completed initialization"
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.568663388Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 28 12:41:58 multinode-071500 systemd[1]: Started Docker Application Container Engine.
	Oct 28 12:41:58 multinode-071500 dockerd[644]: time="2024-10-28T12:41:58.569876133Z" level=info msg="API listen on [::]:2376"
	Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.037982702Z" level=info msg="Processing signal 'terminated'"
	Oct 28 12:42:26 multinode-071500 systemd[1]: Stopping Docker Application Container Engine...
	Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041340108Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041546508Z" level=info msg="Daemon shutdown complete"
	Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041620508Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 28 12:42:26 multinode-071500 dockerd[644]: time="2024-10-28T12:42:26.041663608Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 28 12:42:27 multinode-071500 systemd[1]: docker.service: Deactivated successfully.
	Oct 28 12:42:27 multinode-071500 systemd[1]: Stopped Docker Application Container Engine.
	Oct 28 12:42:27 multinode-071500 systemd[1]: Starting Docker Application Container Engine...
	Oct 28 12:42:27 multinode-071500 dockerd[1082]: time="2024-10-28T12:42:27.099988190Z" level=info msg="Starting up"
	Oct 28 12:43:27 multinode-071500 dockerd[1082]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 28 12:43:27 multinode-071500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 28 12:43:27 multinode-071500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 28 12:43:27 multinode-071500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1028 12:43:27.171485   12604 out.go:270] * 
	W1028 12:43:27.172705   12604 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:43:27.176319   12604 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-071500 --wait=true -v=8 --alsologtostderr --driver=hyperv" : exit status 90
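Note: the exit status 90 above traces to dockerd inside the multinode-071500 VM timing out while dialing /run/containerd/containerd.sock after the restart (see the journalctl dump in the stderr capture). A minimal diagnostic sketch, assuming the VM is still reachable over SSH and that the guest runs a separate containerd unit; these commands are illustrative and are not part of the recorded test run:

	out/minikube-windows-amd64.exe ssh -p multinode-071500
	# inside the guest: check both services and the socket dockerd failed to dial
	sudo systemctl status containerd docker --no-pager
	sudo journalctl -u containerd -u docker --no-pager | tail -n 50
	ls -l /run/containerd/containerd.sock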
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.7244534s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:43:40.081172   13944 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartMultiNode (194.94s)
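Note: the status output above warns that kubectl points at a stale minikube-vm context and that the "multinode-071500" endpoint is missing from the kubeconfig (status.go:458). A short sketch of the follow-up the warning itself suggests, assuming the same profile name; illustrative only, not part of the recorded run:

	out/minikube-windows-amd64.exe update-context -p multinode-071500
	kubectl config get-contexts
	kubectl config view --minify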

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (473.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-071500
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-071500-m01 --driver=hyperv
E1028 12:44:45.579951    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:46:39.734357    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-071500-m01 --driver=hyperv: (3m25.8248508s)
multinode_test.go:466: expected start profile command to fail. args "out/minikube-windows-amd64.exe start -p multinode-071500-m01 --driver=hyperv"
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-071500-m02 --driver=hyperv
E1028 12:47:48.671403    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:49:45.583502    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-071500-m02 --driver=hyperv: (3m25.9312761s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-071500
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-071500: exit status 103 (7.8442457s)

                                                
                                                
-- stdout --
	* The control-plane node multinode-071500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-071500"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-071500-m02
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-071500-m02: (40.8832556s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-071500 -n multinode-071500: exit status 6 (12.8267233s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:51:33.655785    9972 status.go:458] kubeconfig endpoint: get endpoint: "multinode-071500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-071500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (473.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (299.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-767100 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-767100 --driver=hyperv: exit status 1 (4m59.6918548s)

                                                
                                                
-- stdout --
	* [NoKubernetes-767100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-767100" primary control-plane node in "NoKubernetes-767100" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-767100 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-767100 -n NoKubernetes-767100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-767100 -n NoKubernetes-767100: exit status 7 (293.6049ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-767100" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.99s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (10800.515s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-263100 "pgrep -a kubelet"
panic: test timed out after 3h0m0s
	running tests:
		TestNetworkPlugins (28m47s)
		TestNetworkPlugins/group/auto (8m0s)
		TestNetworkPlugins/group/auto/KubeletFlags (8s)
		TestNetworkPlugins/group/calico (5m46s)
		TestNetworkPlugins/group/calico/Start (5m46s)
		TestNetworkPlugins/group/custom-flannel (4m25s)
		TestNetworkPlugins/group/custom-flannel/Start (4m25s)
		TestPause (10m45s)
		TestPause/serial (10m45s)
		TestPause/serial/SecondStartNoReconfiguration (2m27s)
		TestStartStop (28m32s)

                                                
                                                
goroutine 2386 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc000161860, 0xc00008bbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x104
testing.runTests(0xc0008ba000, {0x52e86a0, 0x2a, 0x2a}, {0xffffffffffffffff?, 0x53d979?, 0x530fa20?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc0008a2000)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0008a2000)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000093480)
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1019 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a4e000, 0xc001a2ea10)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1018
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 119 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0008b2550, 0x3b)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc000c6bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39b8080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b2580)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006081a0, {0x3963ec0, 0xc000998720}, 0x1, 0xc000904070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006081a0, 0x3b9aca00, 0x0, 0x1, 0xc000904070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 151
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1241 [chan send, 142 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bac600, 0xc00195aa10)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 855
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 917 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001bba990, 0x35)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001515d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39b8080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bba9c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019b0bc0, {0x3963ec0, 0xc000b65ce0}, 0x1, 0xc000904070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019b0bc0, 0x3b9aca00, 0x0, 0x1, 0xc000904070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 954
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 120 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x399c930, 0xc000904070}, 0xc00148bf50, 0xc00148bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x399c930, 0xc000904070}, 0x10?, 0xc00148bf50, 0xc00148bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x399c930?, 0xc000904070?}, 0xc00148bfd0?, 0x8aae64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x60f825?, 0xc000467e00?, 0xc000904310?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 151
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 150 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3992be0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 149
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 151 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b2580, 0xc000904070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 149
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2287 [syscall, 7 minutes]:
syscall.SyscallN(0xc001489c7e?, {0xc001489c40?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc001489ca8?, 0x1000000485ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x54c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc000467c80?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000467c80)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000467c80)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001a7a9c0, 0xc000467c80)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc001a7a9c0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc001a7a9c0, 0xc0018f01b0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2194
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2236 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00070bba0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00070bba0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00070bba0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00070bba0, 0xc00081aac0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 121 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 120
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2238 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001a7a1a0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001a7a1a0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a7a1a0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001a7a1a0, 0xc00081ab40)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 622 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x1ac5057e538, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x5385b5?, 0x4e09e5?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001932020, 0xc00172bb88)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc001932008, 0x3f4, {0xc000d88000?, 0x2000?, 0x0?}, 0x172bc1c?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc001932008, 0xc00172bd68)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc001932008)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc001afa2c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001afa2c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000a4a5a0, {0x398fac0, 0xc001afa2c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000a4a5a0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00070b860)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 619
	/home/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2351 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a4e180, 0xc00195a2a0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2348
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2237 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001a7a000)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001a7a000)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a7a000)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001a7a000, 0xc00081ab00)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 954 [chan receive, 149 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bba9c0, 0xc000904070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 907
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2154 [chan receive]:
testing.(*T).Run(0xc000698b60, {0x2c981ac?, 0x395aa10?}, 0xc00094c5a0)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000698b60)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:126 +0x7cf
testing.tRunner(0xc000698b60, 0xc000778100)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2370 [syscall, 3 minutes]:
syscall.SyscallN(0x28ca060?, {0xc001ec5af0?, 0xc001ec5b20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0598?, 0x485a77?, 0xc000c11f80?, 0x10?, 0x10?, 0x101004782c6?, 0x1ac50477600?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x474, {0xc000294671?, 0x198f, 0x539b5f?}, 0x535dfc0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00187c008?, {0xc000294671?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00187c008, {0xc000294671, 0x198f, 0x198f})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127020, {0xc000294671?, 0x4f0180?, 0x2000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001aca2d0, {0x39624a0, 0xc000127040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc001aca2d0}, {0x39624a0, 0xc000127040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3962620, 0xc001aca2d0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc001aca2d0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc001aca2d0}, {0x3962580, 0xc000127020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001ab80e0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2352
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2161 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000699a00, {0x2c89b32?, 0x395aa10?}, 0xc001aca000)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699a00)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc000699a00, 0xc000778780)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2349 [syscall, 5 minutes]:
syscall.SyscallN(0x1ac507e3208?, {0xc002087af0?, 0xc002087b20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0a28?, 0x2004d?, 0x1?, 0xc001566cb0?, 0xc002087ba8?, 0x101004afc39?, 0x1ac5058c348?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x4ac, {0xc000cfea04?, 0x5fc, 0x0?}, 0xc000c27c00?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc000568d88?, {0xc000cfea04?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc000568d88, {0xc000cfea04, 0x5fc, 0x5fc})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000126f58, {0xc000cfea04?, 0x0?, 0x204?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001aca0c0, {0x39624a0, 0xc000948118})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc001aca0c0}, {0x39624a0, 0xc000948118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002087e78?, {0x3962620, 0xc001aca0c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc001aca0c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc001aca0c0}, {0x3962580, 0xc000126f58}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001ab98f0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2348
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2354 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc000467c80, 0xc0017ec380)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2287
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2239 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001a7a340)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001a7a340)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a7a340)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001a7a340, 0xc00081ac00)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2372 [syscall]:
syscall.SyscallN(0xc0007c5bd6?, {0xc0007c5b98?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc0007c5c00?, 0x1000000485ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x6c0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc001a4e480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001a4e480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001a4e480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001a7a680, 0xc001a4e480)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.3(0xc001a7a680)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:133 +0x1b5
testing.tRunner(0xc001a7a680, 0xc00094c5a0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2289 [syscall]:
syscall.SyscallN(0xffffff01?, {0xc0019e3af0?, 0x0?, 0x246?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x0?, 0x0?, 0x2f2066782d207261?, 0x6564616f6c657270?, 0x7a6c2e7261742e64?, 0x38392e3728203a34?, 0xa29733637303135?, 0x3331203832303149?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x2f4, {0xc0014970a7?, 0xf59, 0x539b5f?}, 0x222e646569666963?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00187c908?, {0xc0014970a7?, 0x8000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00187c908, {0xc0014970a7, 0xf59, 0xf59})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000c22058, {0xc0014970a7?, 0x4f0180?, 0x3f35?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0018f02a0, {0x39624a0, 0xc000948258})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc0018f02a0}, {0x39624a0, 0xc000948258}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3962620, 0xc0018f02a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc0018f02a0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc0018f02a0}, {0x3962580, 0xc000c22058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001ab80e0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2287
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2054 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0006989c0, {0x2c89b2d?, 0xc00149ff60?}, 0xc0017020d8)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0006989c0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0006989c0, 0x362b078)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2352 [syscall, 3 minutes]:
syscall.SyscallN(0xc000639b1e?, {0xc000639ae0?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc000639b48?, 0x1000000485ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x6ec, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc001a4e300?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001a4e300)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001a4e300)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001a7ad00, 0xc001a4e300)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x399c618, 0xc0004284d0}, 0xc001a7ad00, {0xc001980600?, 0x7ff91ecc5f50?})
	/home/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x237
k8s.io/minikube/test/integration.TestPause.func1.1(0xc001a7ad00)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc001a7ad00, 0xc00081a2c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2340
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2159 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0006996c0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0006996c0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006996c0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0006996c0, 0xc000778500)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 953 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3992be0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 907
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2041 [chan receive, 29 minutes]:
testing.(*T).Run(0xc000691520, {0x2c89b2d?, 0x5cf693?}, 0x362b2b0)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop(0xc000691520)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000691520, 0x362b0c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2374 [syscall]:
syscall.SyscallN(0xc?, {0xc000953af0?, 0xc000953b20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0108?, 0xc0014b0035?, 0x20000?, 0x0?, 0x1?, 0x101004782c6?, 0x1ac50491408?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x50c, {0xc0006bf400?, 0x200, 0x0?}, 0xc000953c04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00187cd88?, {0xc0006bf400?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00187cd88, {0xc0006bf400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127080, {0xc0006bf400?, 0x4f0180?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001aca3c0, {0x39624a0, 0xc000d0c040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc001aca3c0}, {0x39624a0, 0xc000d0c040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3962620, 0xc001aca3c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc001aca3c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc001aca3c0}, {0x3962580, 0xc000127080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000d8e200?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2372
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2373 [syscall]:
syscall.SyscallN(0xc00149da98?, {0xc00149daf0?, 0xc00149db20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0598?, 0xc0014dc035?, 0xc000c11f80?, 0x10?, 0x10?, 0x1010149dbc8?, 0x1ac50491548?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x47c, {0xc000298000?, 0x200, 0x0?}, 0x4f?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00187c6c8?, {0xc000298000?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00187c6c8, {0xc000298000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127048, {0xc000298000?, 0xc00149dd50?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001aca390, {0x39624a0, 0xc0004e4020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc001aca390}, {0x39624a0, 0xc0004e4020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00149de78?, {0x3962620, 0xc001aca390})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc001aca390?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc001aca390}, {0x3962580, 0xc000127048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00195ae00?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2372
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2348 [syscall, 5 minutes]:
syscall.SyscallN(0xc001557c7e?, {0xc001557c40?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc001557ca8?, 0x1000000485ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x4dc, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc001a4e180?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001a4e180)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001a4e180)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001a7ab60, 0xc001a4e180)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc001a7ab60)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc001a7ab60, 0xc001aca000)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2161
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 919 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 918
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 918 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x399c930, 0xc000904070}, 0xc001555f50, 0xc001555f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x399c930, 0xc000904070}, 0xf0?, 0xc001555f50, 0xc001555f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x399c930?, 0xc000904070?}, 0x0?, 0x205d65726f635b20?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x60f825?, 0xc0015c4000?, 0xc00159eaf0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 954
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2353 [syscall, 3 minutes]:
syscall.SyscallN(0xc?, {0xc001685af0?, 0xc001685b20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0598?, 0x41?, 0x2?, 0x0?, 0x1?, 0x101004782c6?, 0x1ac50492808?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x6e8, {0xc000c261e7?, 0x219, 0x539b5f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc000569448?, {0xc000c261e7?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc000569448, {0xc000c261e7, 0x219, 0x219})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000126fe0, {0xc000c261e7?, 0xc001685d50?, 0x68?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001aca2a0, {0x39624a0, 0xc0009480b8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc001aca2a0}, {0x39624a0, 0xc0009480b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001685e78?, {0x3962620, 0xc001aca2a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc001aca2a0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc001aca2a0}, {0x3962580, 0xc000126fe0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001ab8540?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2352
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2350 [syscall, 5 minutes]:
syscall.SyscallN(0x1ac504905a8?, {0xc001995af0?, 0xc001995b20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0a28?, 0xc001662067?, 0x0?, 0xc001566cb0?, 0xc001995bd0?, 0x101004782c6?, 0x1ac50699958?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x45c, {0xc000525c76?, 0x38a, 0x539b5f?}, 0x7?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc000569b08?, {0xc000525c76?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc000569b08, {0xc000525c76, 0x38a, 0x38a})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000126f90, {0xc000525c76?, 0x4f0180?, 0x1000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001aca0f0, {0x39624a0, 0xc00090a510})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc001aca0f0}, {0x39624a0, 0xc00090a510}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3962620, 0xc001aca0f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc001aca0f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc001aca0f0}, {0x3962580, 0xc000126f90}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001821420?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2348
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2233 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc0006911e0, 0x362b2b0)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 2041
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2234 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000691380)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000691380)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000691380)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000691380, 0xc00081aa40)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2160 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000699860)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000699860)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699860)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000699860, 0xc000778580)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2157 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000699380)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000699380)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699380)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000699380, 0xc000778380)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2235 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0001604e0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0001604e0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0001604e0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0001604e0, 0xc00081aa80)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2233
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2375 [select]:
os/exec.(*Cmd).watchCtx(0xc001a4e480, 0xc00195a540)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2372
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2194 [chan receive, 7 minutes]:
testing.(*T).Run(0xc000699ba0, {0x2c89b32?, 0x395aa10?}, 0xc0018f01b0)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699ba0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc000699ba0, 0xc000778800)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2158 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000699520)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000699520)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699520)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000699520, 0xc000778400)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2371 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a4e300, 0xc00195a3f0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2352
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2153 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc000698680, 0xc0017020d8)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 2054
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2156 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0006991e0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0006991e0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006991e0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0006991e0, 0xc000778300)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2155 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0008d7180)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000699040)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000699040)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699040)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000699040, 0xc000778200)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2153
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2288 [syscall, 3 minutes]:
syscall.SyscallN(0xc?, {0xc001af1af0?, 0xc001af1b20?, 0x493c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0x4e09e5?, 0x1ac0adf0598?, 0x4d?, 0x0?, 0xc001af1bc8?, 0x5ccfca?, 0x101004782c6?, 0x1ac507e3208?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x590, {0xc0007fea2b?, 0x5d5, 0x539b5f?}, 0xc000505a40?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00187c248?, {0xc0007fea2b?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00187c248, {0xc0007fea2b, 0x5d5, 0x5d5})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000c22040, {0xc0007fea2b?, 0x4f0180?, 0x22a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0018f0270, {0x39624a0, 0xc000c8a700})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3962620, 0xc0018f0270}, {0x39624a0, 0xc000c8a700}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4aa1c5?, {0x3962620, 0xc0018f0270})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x52911d0?, {0x3962620?, 0xc0018f0270?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3962620, 0xc0018f0270}, {0x3962580, 0xc000c22040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0x362b0a0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2287
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2340 [chan receive, 3 minutes]:
testing.(*T).Run(0xc001a7a4e0, {0x2cc5796?, 0x24?}, 0xc00081a2c0)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestPause.func1(0xc001a7a4e0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc001a7a4e0, 0xc0005ca480)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2066
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2066 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0006904e0, {0x2c8af0a?, 0xd18c2e2800?}, 0xc0005ca480)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestPause(0xc0006904e0)
	/home/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc0006904e0, 0x362b090)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377
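
The goroutine dump above is dominated by os/exec plumbing: watchCtx waiters, output-copy goroutines blocked in syscall.readFile, and integration.Run callers (helpers_test.go:103) blocked in syscall.WaitForSingleObject on child minikube processes such as the `ssh -p auto-263100 "pgrep -a kubelet"` invocation, which never returned before the 3h0m0s go-test alarm fired. The sketch below shows the same context-bounded os/exec pattern using only the standard library; the binary path and arguments are copied from the log, and the two-minute deadline is an illustrative value, not anything the suite actually configures.

// Illustrative only: a context-bounded invocation of the minikube binary,
// the same os/exec pattern behind the watchCtx / WaitForSingleObject frames
// above. Binary path and arguments come from the log; the deadline is an
// assumed value for the sketch.
package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the child so a hung `minikube ssh` cannot consume the whole
	// go-test budget (3h0m0s in this run).
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
		"ssh", "-p", "auto-263100", "pgrep -a kubelet")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	// Run blocks in Process.Wait; the watchCtx goroutine kills the child
	// if ctx expires first, so the call cannot hang past the deadline.
	err := cmd.Run()
	fmt.Printf("err=%v\nstdout:\n%s\nstderr:\n%s\n", err, stdout.String(), stderr.String())
}

A per-command deadline like this is one way a wedged `minikube ssh` could surface as a single failed subtest instead of consuming the remainder of the suite-wide timeout.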

                                                
                                    

Test pass (151/206)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.39
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.53
9 TestDownloadOnly/v1.20.0/DeleteAll 0.95
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.1
12 TestDownloadOnly/v1.31.2/json-events 10.96
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.31
18 TestDownloadOnly/v1.31.2/DeleteAll 0.96
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 1.04
21 TestBinaryMirror 7.62
22 TestOffline 433.44
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.31
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.31
27 TestAddons/Setup 446.84
29 TestAddons/serial/Volcano 66.66
31 TestAddons/serial/GCPAuth/Namespaces 0.37
32 TestAddons/serial/GCPAuth/FakeCredentials 10.72
35 TestAddons/parallel/Registry 36.14
36 TestAddons/parallel/Ingress 63.22
37 TestAddons/parallel/InspektorGadget 27.56
38 TestAddons/parallel/MetricsServer 22.4
40 TestAddons/parallel/CSI 102.1
41 TestAddons/parallel/Headlamp 43.94
42 TestAddons/parallel/CloudSpanner 20.95
43 TestAddons/parallel/LocalPath 86.43
44 TestAddons/parallel/NvidiaDevicePlugin 22.07
45 TestAddons/parallel/Yakd 27.62
47 TestAddons/StoppedEnableDisable 55.18
48 TestCertOptions 586.16
49 TestCertExpiration 925.04
50 TestDockerFlags 432.34
51 TestForceSystemdFlag 570.19
52 TestForceSystemdEnv 521.11
59 TestErrorSpam/start 18.78
60 TestErrorSpam/status 38.86
61 TestErrorSpam/pause 24.23
62 TestErrorSpam/unpause 24.61
63 TestErrorSpam/stop 58.58
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 206.06
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 135.95
70 TestFunctional/serial/KubeContext 0.15
71 TestFunctional/serial/KubectlGetPods 0.25
74 TestFunctional/serial/CacheCmd/cache/add_remote 28.16
75 TestFunctional/serial/CacheCmd/cache/add_local 10.98
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.32
77 TestFunctional/serial/CacheCmd/cache/list 0.3
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.83
79 TestFunctional/serial/CacheCmd/cache/cache_reload 38.37
80 TestFunctional/serial/CacheCmd/cache/delete 0.62
81 TestFunctional/serial/MinikubeKubectlCmd 0.56
83 TestFunctional/serial/ExtraConfig 132.92
84 TestFunctional/serial/ComponentHealth 0.2
85 TestFunctional/serial/LogsCmd 9.08
86 TestFunctional/serial/LogsFileCmd 11.32
87 TestFunctional/serial/InvalidService 21.82
89 TestFunctional/parallel/ConfigCmd 2.23
93 TestFunctional/parallel/StatusCmd 45.65
97 TestFunctional/parallel/ServiceCmdConnect 27.86
98 TestFunctional/parallel/AddonsCmd 0.85
99 TestFunctional/parallel/PersistentVolumeClaim 39.6
101 TestFunctional/parallel/SSHCmd 21.73
102 TestFunctional/parallel/CpCmd 63.86
103 TestFunctional/parallel/MySQL 78.15
104 TestFunctional/parallel/FileSync 11.11
105 TestFunctional/parallel/CertSync 69.58
109 TestFunctional/parallel/NodeLabels 0.28
111 TestFunctional/parallel/NonActiveRuntimeDisabled 11.69
113 TestFunctional/parallel/License 3.76
114 TestFunctional/parallel/Version/short 0.31
115 TestFunctional/parallel/Version/components 8.44
116 TestFunctional/parallel/ImageCommands/ImageListShort 8.11
117 TestFunctional/parallel/ImageCommands/ImageListTable 8.04
118 TestFunctional/parallel/ImageCommands/ImageListJson 7.96
119 TestFunctional/parallel/ImageCommands/ImageListYaml 7.97
120 TestFunctional/parallel/ImageCommands/ImageBuild 36.21
121 TestFunctional/parallel/ImageCommands/Setup 2.48
122 TestFunctional/parallel/ServiceCmd/DeployApp 18.56
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.24
124 TestFunctional/parallel/ServiceCmd/List 15.18
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.53
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.24
128 TestFunctional/parallel/ServiceCmd/JSONOutput 15.59
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 19.89
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.82
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.97
142 TestFunctional/parallel/ImageCommands/ImageRemove 17.01
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.74
145 TestFunctional/parallel/DockerEnv/powershell 49.77
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.76
147 TestFunctional/parallel/UpdateContextCmd/no_changes 2.69
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.86
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.69
150 TestFunctional/parallel/ProfileCmd/profile_not_create 15.41
151 TestFunctional/parallel/ProfileCmd/profile_list 15.34
152 TestFunctional/parallel/ProfileCmd/profile_json_output 14.05
153 TestFunctional/delete_echo-server_images 0.22
154 TestFunctional/delete_my-image_image 0.11
155 TestFunctional/delete_minikube_cached_images 0.09
159 TestMultiControlPlane/serial/StartCluster 752.35
160 TestMultiControlPlane/serial/DeployApp 14.04
162 TestMultiControlPlane/serial/AddWorkerNode 277.33
163 TestMultiControlPlane/serial/NodeLabels 0.2
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 50.8
168 TestImageBuild/serial/Setup 200.38
169 TestImageBuild/serial/NormalBuild 10.48
170 TestImageBuild/serial/BuildWithBuildArg 9.1
171 TestImageBuild/serial/BuildWithDockerIgnore 8.47
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.53
176 TestJSONOutput/start/Command 238.41
177 TestJSONOutput/start/Audit 0.09
179 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/pause/Command 8.22
183 TestJSONOutput/pause/Audit 0.03
185 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/unpause/Command 8.08
189 TestJSONOutput/unpause/Audit 0.05
191 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/stop/Command 39.56
195 TestJSONOutput/stop/Audit 0
197 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
199 TestErrorJSONOutput 1.09
204 TestMainNoArgs 0.28
205 TestMinikubeProfile 546.52
208 TestMountStart/serial/StartWithMountFirst 164.39
209 TestMountStart/serial/VerifyMountFirst 9.91
210 TestMountStart/serial/StartWithMountSecond 159.96
211 TestMountStart/serial/VerifyMountSecond 9.76
212 TestMountStart/serial/DeleteFirst 31.73
213 TestMountStart/serial/VerifyMountPostDelete 10.31
214 TestMountStart/serial/Stop 27.75
215 TestMountStart/serial/RestartStopped 123.85
216 TestMountStart/serial/VerifyMountPostStop 9.95
236 TestPreload 544.65
237 TestScheduledStopWindows 340.98
242 TestRunningBinaryUpgrade 1082.6
244 TestKubernetesUpgrade 1094.12
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.36
249 TestStoppedBinaryUpgrade/Setup 0.65
250 TestStoppedBinaryUpgrade/Upgrade 884.81
269 TestStoppedBinaryUpgrade/MinikubeLogs 9.57
x
+
TestDownloadOnly/v1.20.0/json-events (17.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-261600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-261600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.3927986s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 10:43:46.929372    9608 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1028 10:43:47.013042    9608 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-261600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-261600: exit status 85 (524.5831ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-261600 | minikube6\jenkins | v1.34.0 | 28 Oct 24 10:43 UTC |          |
	|         | -p download-only-261600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:43:29
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:43:29.654789    2992 out.go:345] Setting OutFile to fd 744 ...
	I1028 10:43:29.735637    2992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:43:29.735637    2992 out.go:358] Setting ErrFile to fd 748...
	I1028 10:43:29.735637    2992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 10:43:29.751722    2992 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1028 10:43:29.763539    2992 out.go:352] Setting JSON to true
	I1028 10:43:29.766719    2992 start.go:129] hostinfo: {"hostname":"minikube6","uptime":160034,"bootTime":1729952174,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 10:43:29.767815    2992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 10:43:29.782989    2992 out.go:97] [download-only-261600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	W1028 10:43:29.783557    2992 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1028 10:43:29.783557    2992 notify.go:220] Checking for updates...
	I1028 10:43:29.786546    2992 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 10:43:29.789234    2992 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 10:43:29.791852    2992 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:43:29.794499    2992 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1028 10:43:29.799607    2992 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:43:29.800377    2992 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:43:35.750129    2992 out.go:97] Using the hyperv driver based on user configuration
	I1028 10:43:35.750129    2992 start.go:297] selected driver: hyperv
	I1028 10:43:35.750129    2992 start.go:901] validating driver "hyperv" against <nil>
	I1028 10:43:35.750905    2992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:43:35.833025    2992 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1028 10:43:35.833994    2992 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:43:35.833994    2992 cni.go:84] Creating CNI manager for ""
	I1028 10:43:35.833994    2992 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 10:43:35.833994    2992 start.go:340] cluster config:
	{Name:download-only-261600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-261600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:43:35.834994    2992 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:43:35.842052    2992 out.go:97] Downloading VM boot image ...
	I1028 10:43:35.842052    2992 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 10:43:39.705767    2992 out.go:97] Starting "download-only-261600" primary control-plane node in "download-only-261600" cluster
	I1028 10:43:39.706628    2992 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 10:43:39.749639    2992 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1028 10:43:39.749639    2992 cache.go:56] Caching tarball of preloaded images
	I1028 10:43:39.750424    2992 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 10:43:39.754246    2992 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 10:43:39.754246    2992 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:43:39.825860    2992 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1028 10:43:43.244141    2992 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:43:43.244978    2992 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:43:44.311627    2992 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 10:43:44.311962    2992 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-261600\config.json ...
	I1028 10:43:44.312772    2992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-261600\config.json: {Name:mkcd6a9ee5cb76fe15aa114b2bf60a7bb1cf7eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:43:44.314300    2992 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 10:43:44.315640    2992 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-261600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-261600"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.53s)
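
The preload lines above show the usual download flow: fetch the tarball with an expected md5 appended to the URL, then save and verify the checksum before trusting the cache. A minimal sketch of that download-then-verify step (the URL and md5 are copied from the log; the code is an illustration, not minikube's downloader):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing the bytes, then compares
// the result against the expected md5 (as in the "?checksum=md5:..." URLs).
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the bytes while writing them to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 taken from the preload download lines above; the tarball is
	// large, so this is meant as an illustration rather than a quick run.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	if err := downloadWithMD5(url, "preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3"); err != nil {
		panic(err)
	}
}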

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-261600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-261600: (1.1030532s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (10.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-582200 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-582200 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperv: (10.9621127s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (10.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 10:44:00.558701    9608 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1028 10:44:00.558701    9608 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-582200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-582200: exit status 85 (308.2138ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-261600 | minikube6\jenkins | v1.34.0 | 28 Oct 24 10:43 UTC |                     |
	|         | -p download-only-261600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.34.0 | 28 Oct 24 10:43 UTC | 28 Oct 24 10:43 UTC |
	| delete  | -p download-only-261600        | download-only-261600 | minikube6\jenkins | v1.34.0 | 28 Oct 24 10:43 UTC | 28 Oct 24 10:43 UTC |
	| start   | -o=json --download-only        | download-only-582200 | minikube6\jenkins | v1.34.0 | 28 Oct 24 10:43 UTC |                     |
	|         | -p download-only-582200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:43:49
	Running on machine: minikube6
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:43:49.726884   12536 out.go:345] Setting OutFile to fd 740 ...
	I1028 10:43:49.801438   12536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:43:49.801438   12536 out.go:358] Setting ErrFile to fd 700...
	I1028 10:43:49.801438   12536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:43:49.825443   12536 out.go:352] Setting JSON to true
	I1028 10:43:49.828437   12536 start.go:129] hostinfo: {"hostname":"minikube6","uptime":160054,"bootTime":1729952174,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 10:43:49.828437   12536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 10:43:49.834436   12536 out.go:97] [download-only-582200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 10:43:49.834436   12536 notify.go:220] Checking for updates...
	I1028 10:43:49.837428   12536 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 10:43:49.839440   12536 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 10:43:49.843435   12536 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:43:49.846436   12536 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1028 10:43:49.851428   12536 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:43:49.852437   12536 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:43:55.705621   12536 out.go:97] Using the hyperv driver based on user configuration
	I1028 10:43:55.706002   12536 start.go:297] selected driver: hyperv
	I1028 10:43:55.706234   12536 start.go:901] validating driver "hyperv" against <nil>
	I1028 10:43:55.706451   12536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:43:55.759075   12536 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1028 10:43:55.761103   12536 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:43:55.761352   12536 cni.go:84] Creating CNI manager for ""
	I1028 10:43:55.761352   12536 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 10:43:55.761352   12536 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 10:43:55.761352   12536 start.go:340] cluster config:
	{Name:download-only-582200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-582200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:43:55.762035   12536 iso.go:125] acquiring lock: {Name:mk92685f18db3b9b8a4c28d2a00efac35439147d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:43:55.765162   12536 out.go:97] Starting "download-only-582200" primary control-plane node in "download-only-582200" cluster
	I1028 10:43:55.765679   12536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 10:43:55.810493   12536 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 10:43:55.810679   12536 cache.go:56] Caching tarball of preloaded images
	I1028 10:43:55.811270   12536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 10:43:55.814447   12536 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 10:43:55.814447   12536 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:43:55.881768   12536 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4?checksum=md5:979f32540b837894423b337fec69fbf6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 10:43:58.594718   12536 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:43:58.595682   12536 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:43:59.487159   12536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 10:43:59.487594   12536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-582200\config.json ...
	I1028 10:43:59.487594   12536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-582200\config.json: {Name:mke5ea58106ebf3a2f64846039867109d5dec2b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:43:59.488491   12536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 10:43:59.489506   12536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.31.2/kubectl.exe
	
	
	* The control-plane node download-only-582200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-582200"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (1.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-582200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-582200: (1.0401965s)
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (1.04s)

                                                
                                    
x
+
TestBinaryMirror (7.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 10:44:04.880667    9608 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-571400 --alsologtostderr --binary-mirror http://127.0.0.1:56453 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-571400 --alsologtostderr --binary-mirror http://127.0.0.1:56453 --driver=hyperv: (6.8567671s)
helpers_test.go:175: Cleaning up "binary-mirror-571400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-571400
--- PASS: TestBinaryMirror (7.62s)
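
--binary-mirror redirects the kubectl/kubelet/kubeadm downloads from dl.k8s.io to a local HTTP endpoint (here http://127.0.0.1:56453). A minimal sketch of the kind of static file server such a mirror can be, assuming a local ./mirror directory laid out the way minikube requests the binaries (this is not the test's own server):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror (assumed directory) over plain HTTP; the port matches
	// the one the test passed to --binary-mirror, but any free port works.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:56453", nil))
}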

                                                
                                    
x
+
TestOffline (433.44s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-767100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-767100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m31.6203214s)
helpers_test.go:175: Cleaning up "offline-docker-767100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-767100
E1028 13:14:42.845851    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-767100: (41.8204033s)
--- PASS: TestOffline (433.44s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-292500
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-292500: exit status 85 (308.3631ms)

                                                
                                                
-- stdout --
	* Profile "addons-292500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-292500"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-292500
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-292500: exit status 85 (307.1553ms)

                                                
                                                
-- stdout --
	* Profile "addons-292500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-292500"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

                                                
                                    
x
+
TestAddons/Setup (446.84s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-292500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-292500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m26.8429731s)
--- PASS: TestAddons/Setup (446.84s)

                                                
                                    
x
+
TestAddons/serial/Volcano (66.66s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 27.8468ms
addons_test.go:823: volcano-controller stabilized in 27.8468ms
addons_test.go:807: volcano-scheduler stabilized in 27.8468ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-4cj4v" [26d37eee-6d53-4ef4-a949-bbe9c2996ff0] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0057626s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-sx7xj" [98a08839-0b23-46b6-b28c-b6285d3fe4c8] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0078219s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-6dg42" [0b75380d-9583-4527-ae31-0b79dbd7e330] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0085867s
addons_test.go:842: (dbg) Run:  kubectl --context addons-292500 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-292500 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-292500 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7cea4373-4442-409e-b031-9b2d82d8c9f0] Pending
helpers_test.go:344: "test-job-nginx-0" [7cea4373-4442-409e-b031-9b2d82d8c9f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7cea4373-4442-409e-b031-9b2d82d8c9f0] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 23.0138172s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable volcano --alsologtostderr -v=1: (26.7236976s)
--- PASS: TestAddons/serial/Volcano (66.66s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.37s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-292500 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-292500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.37s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.72s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-292500 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-292500 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ab1dc04-bacc-4cdc-8709-ee5557370b6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ab1dc04-bacc-4cdc-8709-ee5557370b6a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0061041s
addons_test.go:633: (dbg) Run:  kubectl --context addons-292500 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-292500 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-292500 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-292500 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.72s)
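
The checks above confirm what the gcp-auth addon injects into the busybox pod: a GOOGLE_APPLICATION_CREDENTIALS variable pointing at /google-app-creds.json plus GOOGLE_CLOUD_PROJECT. A minimal sketch of how a workload inside such a pod would consume those values (hypothetical in-pod program, not the test itself):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Both values are injected by the gcp-auth admission webhook; the path
	// matches the /google-app-creds.json read by the test above.
	credsPath := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
	project := os.Getenv("GOOGLE_CLOUD_PROJECT")

	data, err := os.ReadFile(credsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "credentials not injected:", err)
		os.Exit(1)
	}
	fmt.Printf("project %q, %d bytes of credentials at %s\n", project, len(data), credsPath)
}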

                                                
                                    
x
+
TestAddons/parallel/Registry (36.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.0095ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jp62q" [b6b7a617-27a4-4180-ae78-c0c06a37a503] Running
I1028 10:53:28.380867    9608 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 10:53:28.380867    9608 kapi.go:107] duration metric: took 14.9952ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0077573s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vtqtr" [62084904-4999-4043-8254-52fd1c448738] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0047478s
addons_test.go:331: (dbg) Run:  kubectl --context addons-292500 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-292500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-292500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.6727793s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 ip: (2.923903s)
2024/10/28 10:53:48 [DEBUG] GET http://172.27.247.30:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable registry --alsologtostderr -v=1: (16.3071068s)
--- PASS: TestAddons/parallel/Registry (36.14s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (63.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-292500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-292500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-292500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ddac30aa-43cf-457b-bdbc-052b3f1b1ce5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ddac30aa-43cf-457b-bdbc-052b3f1b1ce5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.0062047s
I1028 10:55:04.752838    9608 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.0581704s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-292500 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 ip: (2.6444069s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.27.247.30
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable ingress-dns --alsologtostderr -v=1: (16.5886964s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable ingress --alsologtostderr -v=1: (22.6842112s)
--- PASS: TestAddons/parallel/Ingress (63.22s)
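
The curl above exercises host-based routing: the request is sent to the node's own address, but the Host header is set to nginx.example.com so ingress-nginx matches the Ingress rule and forwards to the nginx Service. A minimal sketch of the same trick in Go (target address assumed, mirroring the test's 127.0.0.1):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The URL targets the node/tunnel address; the hostname only appears in
	// the Host header, which is what ingress-nginx routes on.
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // equivalent of curl -H 'Host: nginx.example.com'

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}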

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (27.56s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jzdpv" [679fa216-40bc-44c8-8107-dcedfbb0a1a2] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0078573s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable inspektor-gadget --alsologtostderr -v=1: (21.5450775s)
--- PASS: TestAddons/parallel/InspektorGadget (27.56s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (22.4s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.1694ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-w5pgb" [b5a7d553-d4e4-406a-a733-e507890f4396] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00665s
addons_test.go:402: (dbg) Run:  kubectl --context addons-292500 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable metrics-server --alsologtostderr -v=1: (16.1452752s)
--- PASS: TestAddons/parallel/MetricsServer (22.40s)

                                                
                                    
x
+
TestAddons/parallel/CSI (102.1s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 14.9952ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-292500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-292500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6cc2aae1-3e2e-409c-9c23-2b66b8a755a8] Pending
helpers_test.go:344: "task-pv-pod" [6cc2aae1-3e2e-409c-9c23-2b66b8a755a8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6cc2aae1-3e2e-409c-9c23-2b66b8a755a8] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.0063532s
addons_test.go:511: (dbg) Run:  kubectl --context addons-292500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-292500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-292500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-292500 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-292500 delete pod task-pv-pod: (1.1814536s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-292500 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-292500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-292500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [41a90a4f-433c-450e-9101-ae30783ac76a] Pending
helpers_test.go:344: "task-pv-pod-restore" [41a90a4f-433c-450e-9101-ae30783ac76a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [41a90a4f-433c-450e-9101-ae30783ac76a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0069741s
addons_test.go:553: (dbg) Run:  kubectl --context addons-292500 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-292500 delete pod task-pv-pod-restore: (1.876827s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-292500 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-292500 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable volumesnapshots --alsologtostderr -v=1: (16.7643653s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.7952809s)
--- PASS: TestAddons/parallel/CSI (102.10s)
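
The repeated helpers_test.go:394 invocations above are a poll loop on the PVC phase. A standalone sketch of the same check in Go (assumptions: a 2-second poll interval, the 6m0s deadline quoted above, and Bound as the target phase; the log only records the polls themselves):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same query the helper runs: read only the PVC's phase.
		out, err := exec.Command("kubectl", "--context", "addons-292500",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to bind")
}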

                                                
                                    
TestAddons/parallel/Headlamp (43.94s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-292500 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-292500 --alsologtostderr -v=1: (16.9918012s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-7bs8j" [19e8b971-fb45-439a-bbec-8abc60571ab9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-7bs8j" [19e8b971-fb45-439a-bbec-8abc60571ab9] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0073339s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable headlamp --alsologtostderr -v=1: (7.9399184s)
--- PASS: TestAddons/parallel/Headlamp (43.94s)
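
TestAddons/parallel/Headlamp enables the addon and then waits for the labelled pod to report Running before disabling it again. A roughly equivalent manual wait (a sketch, not the helper's own logic; kubectl wait checks the Ready condition rather than polling the pod phase):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the 8m0s wait on app.kubernetes.io/name=headlamp recorded above.
	out, err := exec.Command("kubectl", "--context", "addons-292500",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app.kubernetes.io/name=headlamp",
		"-n", "headlamp", "--timeout=8m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("headlamp pod did not become Ready:", err)
	}
}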

                                                
                                    
TestAddons/parallel/CloudSpanner (20.95s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-6ldbn" [7ec32ea5-61d4-4b0c-a31d-fb2b064a6c01] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006981s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable cloud-spanner --alsologtostderr -v=1: (15.931966s)
--- PASS: TestAddons/parallel/CloudSpanner (20.95s)

                                                
                                    
TestAddons/parallel/LocalPath (86.43s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-292500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-292500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [13231411-cd74-433e-b814-bb2484c798f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [13231411-cd74-433e-b814-bb2484c798f2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [13231411-cd74-433e-b814-bb2484c798f2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0060695s
addons_test.go:906: (dbg) Run:  kubectl --context addons-292500 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 ssh "cat /opt/local-path-provisioner/pvc-ce75f924-daee-4590-8e18-8bc4ddee0faa_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 ssh "cat /opt/local-path-provisioner/pvc-ce75f924-daee-4590-8e18-8bc4ddee0faa_default_test-pvc/file1": (10.6664646s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-292500 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-292500 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.0122447s)
--- PASS: TestAddons/parallel/LocalPath (86.43s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (22.07s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zxbrl" [9d73d60e-569a-4806-8753-9343c86c589f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0089927s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable nvidia-device-plugin --alsologtostderr -v=1: (16.0585657s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.07s)

                                                
                                    
TestAddons/parallel/Yakd (27.62s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I1028 10:53:28.365871    9608 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-cwn79" [a0251162-3d7a-40cb-96fc-dbce2c12613a] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0129532s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-292500 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-292500 addons disable yakd --alsologtostderr -v=1: (21.605332s)
--- PASS: TestAddons/parallel/Yakd (27.62s)

                                                
                                    
TestAddons/StoppedEnableDisable (55.18s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-292500
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-292500: (42.0357003s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-292500
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-292500: (5.2110189s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-292500
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-292500: (4.9735045s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-292500
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-292500: (2.9567657s)
--- PASS: TestAddons/StoppedEnableDisable (55.18s)

                                                
                                    
TestCertOptions (586.16s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-446800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E1028 13:26:39.761526    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-446800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (8m37.5403232s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-446800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-446800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.5244153s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-446800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-446800 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-446800 -- "sudo cat /etc/kubernetes/admin.conf": (10.6465582s)
helpers_test.go:175: Cleaning up "cert-options-446800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-446800
E1028 13:34:45.613623    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-446800: (47.2729373s)
--- PASS: TestCertOptions (586.16s)
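
The openssl call above is what verifies the extra SANs and port requested on the start line. A minimal sketch of the same check, grepping the dumped certificate for the requested IP and hostname (binary path and profile name as used above; matching on the raw strings is an assumption about how openssl prints the Subject Alternative Name section):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "cert-options-446800",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh/openssl failed:", err)
		return
	}
	text := string(out)
	// --apiserver-ips and --apiserver-names values from the start command above.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		fmt.Printf("%-15s in apiserver.crt: %v\n", want, strings.Contains(text, want))
	}
}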

                                                
                                    
TestCertExpiration (925.04s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-817100 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-817100 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m45.5574623s)
E1028 13:29:45.610642    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 13:31:22.859901    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 13:31:39.765303    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-817100 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-817100 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m52.905453s)
helpers_test.go:175: Cleaning up "cert-expiration-817100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-817100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-817100: (46.5740679s)
--- PASS: TestCertExpiration (925.04s)
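
TestCertExpiration first provisions certificates that expire after 3m, then re-runs start with --cert-expiration=8760h so they are regenerated. To inspect the resulting expiry, the certificate can be decoded with the Go standard library (a sketch; it assumes apiserver.crt has been copied out of the VM to the working directory, which is not something this test does):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM data found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	fmt.Println("NotAfter:", cert.NotAfter)
	fmt.Println("expires in:", time.Until(cert.NotAfter).Round(time.Minute))
}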

                                                
                                    
TestDockerFlags (432.34s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-175000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-175000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m9.7492068s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-175000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-175000 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.7509256s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-175000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-175000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.6182114s)
helpers_test.go:175: Cleaning up "docker-flags-175000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-175000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-175000: (41.2211166s)
--- PASS: TestDockerFlags (432.34s)
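
The two systemctl show calls above check that the --docker-env and --docker-opt values from the start line reached the docker unit. The same check as a standalone sketch (the Environment values come from the flags above; the exact --debug/--icc=true spellings in ExecStart are an assumption about how minikube forwards --docker-opt):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func show(property string, wants ...string) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "docker-flags-175000",
		"ssh", "sudo systemctl show docker --property="+property+" --no-pager").Output()
	if err != nil {
		fmt.Println(property, "query failed:", err)
		return
	}
	for _, w := range wants {
		fmt.Printf("%s contains %q: %v\n", property, w, strings.Contains(string(out), w))
	}
}

func main() {
	show("Environment", "FOO=BAR", "BAZ=BAT")
	show("ExecStart", "--debug", "--icc=true")
}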

                                                
                                    
TestForceSystemdFlag (570.19s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-430000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-430000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (8m38.3328803s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-430000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-430000 ssh "docker info --format {{.CgroupDriver}}": (10.6202221s)
helpers_test.go:175: Cleaning up "force-systemd-flag-430000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-430000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-430000: (41.2336458s)
--- PASS: TestForceSystemdFlag (570.19s)
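
Both force-systemd tests end with the same docker info query; with --force-systemd the expected cgroup driver is systemd (an inference from the flag name; the expected value is not printed in this log). As a standalone check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "force-systemd-flag-430000",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("cgroup driver:", driver)
	if driver != "systemd" {
		fmt.Println("expected the systemd cgroup driver with --force-systemd")
	}
}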

                                                
                                    
TestForceSystemdEnv (521.11s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-448800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-448800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m40.908514s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-448800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-448800 ssh "docker info --format {{.CgroupDriver}}": (10.8085462s)
helpers_test.go:175: Cleaning up "force-systemd-env-448800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-448800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-448800: (49.3941794s)
--- PASS: TestForceSystemdEnv (521.11s)

                                                
                                    
TestErrorSpam/start (18.78s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 start --dry-run: (6.1223382s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 start --dry-run: (6.3368018s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 start --dry-run: (6.31766s)
--- PASS: TestErrorSpam/start (18.78s)

                                                
                                    
TestErrorSpam/status (38.86s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 status: (13.3884087s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 status
E1028 11:01:39.662481    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:39.670001    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:39.681731    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:39.704233    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:39.746190    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:39.828347    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:39.990898    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 status: (12.7507601s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 status
E1028 11:01:40.313119    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:40.955343    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:42.238410    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:44.800645    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:01:49.922662    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 status: (12.7161644s)
--- PASS: TestErrorSpam/status (38.86s)

                                                
                                    
TestErrorSpam/pause (24.23s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 pause
E1028 11:02:00.165076    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 pause: (8.329989s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 pause: (7.962945s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 pause: (7.9318042s)
--- PASS: TestErrorSpam/pause (24.23s)

                                                
                                    
TestErrorSpam/unpause (24.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 unpause
E1028 11:02:20.647689    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 unpause: (8.3331359s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 unpause: (8.1100465s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 unpause: (8.1572933s)
--- PASS: TestErrorSpam/unpause (24.61s)

                                                
                                    
TestErrorSpam/stop (58.58s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 stop
E1028 11:03:01.611125    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 stop: (35.6000692s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 stop: (11.5858687s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-046700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-046700 stop: (11.3926022s)
--- PASS: TestErrorSpam/stop (58.58s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9608\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (206.06s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-150200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E1028 11:04:23.533936    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:06:39.665978    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:07:07.377954    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-150200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m26.0521122s)
--- PASS: TestFunctional/serial/StartWithProxy (206.06s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (135.95s)
=== RUN   TestFunctional/serial/SoftStart
I1028 11:07:23.737228    9608 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-150200 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-150200 --alsologtostderr -v=8: (2m15.9517522s)
functional_test.go:663: soft start took 2m15.9538743s for "functional-150200" cluster.
I1028 11:09:39.692371    9608 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (135.95s)

                                                
                                    
TestFunctional/serial/KubeContext (0.15s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.25s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-150200 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (28.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cache add registry.k8s.io/pause:3.1: (9.5948073s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cache add registry.k8s.io/pause:3.3: (9.3534282s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cache add registry.k8s.io/pause:latest: (9.2144503s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (28.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (10.98s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-150200 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2946359741\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-150200 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2946359741\001: (1.8922704s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cache add minikube-local-cache-test:functional-150200
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cache add minikube-local-cache-test:functional-150200: (8.6622669s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cache delete minikube-local-cache-test:functional-150200
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-150200
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh sudo crictl images: (9.8255648s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (38.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.9431813s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.8524936s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cache reload: (8.5850985s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.9874101s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (38.37s)
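
The cache_reload sequence above is: delete pause:latest inside the VM, confirm crictl no longer finds it (the exit status 1 block), run cache reload, and confirm the image is back. The same sequence driven from a small program (profile and image names copied from the commands above):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	const profile = "functional-150200"
	const image = "registry.k8s.io/pause:latest"

	_ = run("-p", profile, "ssh", "sudo docker rmi "+image)
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		fmt.Println("image missing, as expected:", err)
	}
	_ = run("-p", profile, "cache", "reload")
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		fmt.Println("image restored by cache reload")
	}
}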

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.62s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.56s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 kubectl -- --context functional-150200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

                                                
                                    
TestFunctional/serial/ExtraConfig (132.92s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-150200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-150200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m12.9198412s)
functional_test.go:761: restart took 2m12.9201734s for "functional-150200" cluster.
I1028 11:13:58.506855    9608 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (132.92s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.2s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-150200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

                                                
                                    
TestFunctional/serial/LogsCmd (9.08s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 logs: (9.0793398s)
--- PASS: TestFunctional/serial/LogsCmd (9.08s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (11.32s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1336193954\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1336193954\001\logs.txt: (11.3127872s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.32s)

                                                
                                    
TestFunctional/serial/InvalidService (21.82s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-150200 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-150200
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-150200: exit status 115 (17.7286595s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.27.250.220:31929 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-150200 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.82s)
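
The exit status 115 above is minikube's SVC_UNREACHABLE error code for a service with no running pods. The exit code can be recovered from exec.ExitError when reproducing the call (a sketch; it assumes the service from testdata\invalidsvc.yaml is still applied):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"service", "invalid-svc", "-p", "functional-150200").CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The run recorded above exited with 115 (SVC_UNREACHABLE).
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}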

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.23s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 config get cpus: exit status 14 (315.3014ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 config get cpus: exit status 14 (295.2182ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.23s)
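
config get exits with status 14 whenever the key is unset (both Non-zero exit entries above); after config set cpus 2 the same get succeeds. A compact reproduction of that round trip:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configCmd runs `minikube -p functional-150200 config <args>` and returns output and exit code.
func configCmd(args ...string) (string, int) {
	full := append([]string{"-p", "functional-150200", "config"}, args...)
	out, err := exec.Command("out/minikube-windows-amd64.exe", full...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	_, _ = configCmd("unset", "cpus")
	if _, code := configCmd("get", "cpus"); code == 14 {
		fmt.Println("cpus unset: exit 14, matching the log above")
	}
	_, _ = configCmd("set", "cpus", "2")
	val, _ := configCmd("get", "cpus")
	fmt.Println("cpus =", val)
}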

                                                
                                    
TestFunctional/parallel/StatusCmd (45.65s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 status
E1028 11:16:39.673118    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:854: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 status: (15.1239776s)
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.0142675s)
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 status -o json
functional_test.go:872: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 status -o json: (15.5151326s)
--- PASS: TestFunctional/parallel/StatusCmd (45.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-150200 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-150200 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-gtr8n" [4af8a3e2-b5cd-4cec-ba2f-621c6bf10561] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-gtr8n" [4af8a3e2-b5cd-4cec-ba2f-621c6bf10561] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00476s
functional_test.go:1649: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 service hello-node-connect --url
functional_test.go:1649: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 service hello-node-connect --url: (19.3108076s)
functional_test.go:1655: found endpoint for hello-node-connect: http://172.27.250.220:31960
functional_test.go:1675: http://172.27.250.220:31960: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-gtr8n

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.27.250.220:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.27.250.220:31960
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.86s)
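The test resolves the NodePort URL with "minikube service hello-node-connect --url" and then fetches it; the echoserver body above is what a plain HTTP GET returns. A sketch of that final probe (the URL is the one from this run and will differ on another cluster):

// Sketch of the final connectivity probe: fetch the NodePort URL printed by
// "minikube service hello-node-connect --url" and print the echoserver reply.
// The URL below is taken from this run's log.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	url := "http://172.27.250.220:31960"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Print(string(body))
}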

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.85s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [11f21928-6ded-4c06-ba52-2f346f9fb8b4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0078935s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-150200 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-150200 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-150200 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-150200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6af007e5-0fe1-4554-98bc-640a00c31ddc] Pending
helpers_test.go:344: "sp-pod" [6af007e5-0fe1-4554-98bc-640a00c31ddc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6af007e5-0fe1-4554-98bc-640a00c31ddc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.00872s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-150200 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-150200 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-150200 delete -f testdata/storage-provisioner/pod.yaml: (1.0861723s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-150200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [56994a78-39c5-427b-adb6-90d48747bdc9] Pending
helpers_test.go:344: "sp-pod" [56994a78-39c5-427b-adb6-90d48747bdc9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [56994a78-39c5-427b-adb6-90d48747bdc9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0080747s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-150200 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.60s)
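The sequence above checks that data written through the claim survives pod recreation: touch a file under /tmp/mount, delete and re-apply the pod, then list the mount again. A rough equivalent using kubectl directly (context name and manifests as in this run; the readiness wait the suite performs between apply and exec is omitted):

// Rough equivalent of the persistence check, using kubectl directly against the
// functional-150200 context and the same storage-provisioner manifests.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-150200"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v: %v\n", args, err)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // recreate the pod
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))  // the file should still be there
}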

                                                
                                    
TestFunctional/parallel/SSHCmd (21.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "echo hello": (10.4546869s)
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "cat /etc/hostname": (11.2741204s)
--- PASS: TestFunctional/parallel/SSHCmd (21.73s)

                                                
                                    
TestFunctional/parallel/CpCmd (63.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.362624s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh -n functional-150200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh -n functional-150200 "sudo cat /home/docker/cp-test.txt": (10.5618045s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cp functional-150200:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd90970988\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cp functional-150200:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd90970988\001\cp-test.txt: (11.0324776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh -n functional-150200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh -n functional-150200 "sudo cat /home/docker/cp-test.txt": (12.2430595s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (9.5924471s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh -n functional-150200 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh -n functional-150200 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.0632283s)
--- PASS: TestFunctional/parallel/CpCmd (63.86s)
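Each copy is verified by reading the file back over SSH inside the VM. A compact, illustrative round-trip against the functional-150200 profile:

// Illustrative round-trip: copy a local file into the VM with "minikube cp" and
// read it back over SSH, using the functional-150200 profile from this run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := func(args ...string) string {
		full := append([]string{"-p", "functional-150200"}, args...)
		out, _ := exec.Command("minikube", full...).CombinedOutput()
		return string(out)
	}
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "-n", "functional-150200", "sudo cat /home/docker/cp-test.txt"))
}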

                                                
                                    
TestFunctional/parallel/MySQL (78.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-150200 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-px7s8" [2cc3c5c1-7014-423c-b380-1158cb6e5c57] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-px7s8" [2cc3c5c1-7014-423c-b380-1158cb6e5c57] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 52.0067702s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;": exit status 1 (298.3124ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:18:16.396846    9608 retry.go:31] will retry after 1.312028803s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;": exit status 1 (297.0077ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:18:18.016443    9608 retry.go:31] will retry after 1.83483117s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;": exit status 1 (291.2818ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:18:20.154921    9608 retry.go:31] will retry after 2.050166058s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;": exit status 1 (343.2555ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:18:22.563033    9608 retry.go:31] will retry after 4.786759697s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;": exit status 1 (315.6821ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:18:27.677391    9608 retry.go:31] will retry after 3.115563385s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;": exit status 1 (311.2503ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:18:31.116653    9608 retry.go:31] will retry after 10.181030652s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-150200 exec mysql-6cdb49bbb-px7s8 -- mysql -ppassword -e "show databases;"
E1028 11:21:39.676640    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (78.15s)
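The repeated "Non-zero exit ... will retry after ..." lines are the test absorbing MySQL's startup window: the pod reports Running before mysqld accepts connections, so the query is retried with a growing delay until it succeeds. A hedged sketch of that pattern (pod name taken from this run; the real suite uses its own retry helper with randomized backoff):

// Hedged sketch of the retry pattern visible in the log: re-run the query until
// mysqld inside the pod accepts connections.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-150200",
			"exec", "mysql-6cdb49bbb-px7s8", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying after %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // doubling is only for illustration
	}
}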

                                                
                                    
TestFunctional/parallel/FileSync (11.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/9608/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/test/nested/copy/9608/hosts"
functional_test.go:1931: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/test/nested/copy/9608/hosts": (11.1083243s)
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.11s)

                                                
                                    
TestFunctional/parallel/CertSync (69.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/9608.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/9608.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/9608.pem": (11.2389612s)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/9608.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /usr/share/ca-certificates/9608.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /usr/share/ca-certificates/9608.pem": (11.9918321s)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.9491681s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/96082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/96082.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/96082.pem": (11.5525317s)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/96082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /usr/share/ca-certificates/96082.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /usr/share/ca-certificates/96082.pem": (11.3244241s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.5224004s)
--- PASS: TestFunctional/parallel/CertSync (69.58s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-150200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.28s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (11.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 ssh "sudo systemctl is-active crio": exit status 1 (11.6908375s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.69s)
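The non-zero exit here is the pass condition: "systemctl is-active" exits with a non-zero status and prints "inactive" when a unit is stopped, which is the expected state for crio while the Docker runtime is in use. A small sketch of that interpretation (profile name as in this run, assuming minikube on PATH):

// Sketch of reading the exit status the way this test does: a failing
// "systemctl is-active crio" with "inactive" on stdout means crio is disabled.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-150200",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	crioDisabled := err != nil && strings.Contains(string(out), "inactive")
	fmt.Println("crio disabled:", crioDisabled)
}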

                                                
                                    
TestFunctional/parallel/License (3.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (3.7305673s)
--- PASS: TestFunctional/parallel/License (3.76s)

                                                
                                    
TestFunctional/parallel/Version/short (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 version --short
--- PASS: TestFunctional/parallel/Version/short (0.31s)

                                                
                                    
TestFunctional/parallel/Version/components (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 version -o=json --components: (8.4431595s)
--- PASS: TestFunctional/parallel/Version/components (8.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (8.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls --format short --alsologtostderr: (8.114203s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-150200 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-150200
docker.io/kicbase/echo-server:functional-150200
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-150200 image ls --format short --alsologtostderr:
I1028 11:17:43.069206    7492 out.go:345] Setting OutFile to fd 1880 ...
I1028 11:17:43.160278    7492 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:17:43.160278    7492 out.go:358] Setting ErrFile to fd 1884...
I1028 11:17:43.160278    7492 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:17:43.183002    7492 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:17:43.183002    7492 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:17:43.184145    7492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:17:45.526358    7492 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:17:45.526358    7492 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:45.541564    7492 ssh_runner.go:195] Run: systemctl --version
I1028 11:17:45.541564    7492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:17:48.007403    7492 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:17:48.007801    7492 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:48.007801    7492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
I1028 11:17:50.818406    7492 main.go:141] libmachine: [stdout =====>] : 172.27.250.220

                                                
                                                
I1028 11:17:50.818573    7492 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:50.819574    7492 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
I1028 11:17:50.953102    7492 ssh_runner.go:235] Completed: systemctl --version: (5.4114762s)
I1028 11:17:50.964238    7492 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.11s)
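The stderr trace shows where the listing comes from: minikube runs docker images --no-trunc --format "{{json .}}" inside the VM and decodes one JSON object per line. An illustrative decoder for that output (field names follow Docker's CLI output; this is not minikube's actual parser):

// Illustrative decoder for the per-line JSON emitted by
// docker images --no-trunc --format "{{json .}}".
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	Repository string
	Tag        string
	ID         string
	Size       string
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		var img image
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue // skip anything that is not a JSON line
		}
		fmt.Printf("%s:%s\t%s\t%s\n", img.Repository, img.Tag, img.ID, img.Size)
	}
}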

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (8.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls --format table --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls --format table --alsologtostderr: (8.0342911s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-150200 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-150200 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-150200 | 62491711e5f5e | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 505d571f5fd56 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-scheduler              | v1.31.2           | 847c7bc1a5418 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 0486b6c53a1b5 | 88.4MB |
| docker.io/library/nginx                     | alpine            | cb8f91112b6b5 | 47MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.2           | 9499c9960544e | 94.2MB |
| docker.io/library/nginx                     | latest            | 3b25b682ea82b | 192MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-150200 image ls --format table --alsologtostderr:
I1028 11:18:03.128233    4000 out.go:345] Setting OutFile to fd 1864 ...
I1028 11:18:03.210535    4000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:03.210535    4000 out.go:358] Setting ErrFile to fd 1036...
I1028 11:18:03.210535    4000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:03.230927    4000 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:03.230927    4000 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:03.232645    4000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:18:05.578869    4000 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:18:05.578869    4000 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:05.592005    4000 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:05.592005    4000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:18:07.978280    4000 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:18:07.978280    4000 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:07.978397    4000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
I1028 11:18:10.778802    4000 main.go:141] libmachine: [stdout =====>] : 172.27.250.220

                                                
                                                
I1028 11:18:10.779218    4000 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:10.780036    4000 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
I1028 11:18:10.888588    4000 ssh_runner.go:235] Completed: systemctl --version: (5.296523s)
I1028 11:18:10.898909    4000 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls --format json --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls --format json --alsologtostderr: (7.9606733s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-150200 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-150200"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67400000"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"91500000"},{"id":"62491711e5f5e3145b030f5ac627e68cd16773af617d0806eb9f8a2f93295e58","r
epoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-150200"],"size":"30"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"94200000"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"88400000"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":[],"repoTags":["docker.io/libra
ry/nginx:alpine"],"size":"47000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-150200 image ls --format json --alsologtostderr:
I1028 11:17:55.149073    1724 out.go:345] Setting OutFile to fd 1396 ...
I1028 11:17:55.246089    1724 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:17:55.246132    1724 out.go:358] Setting ErrFile to fd 720...
I1028 11:17:55.246132    1724 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:17:55.263267    1724 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:17:55.263267    1724 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:17:55.264260    1724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:17:57.611490    1724 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:17:57.611490    1724 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:57.625184    1724 ssh_runner.go:195] Run: systemctl --version
I1028 11:17:57.625305    1724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:17:59.977883    1724 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:17:59.977883    1724 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:59.977883    1724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
I1028 11:18:02.768021    1724 main.go:141] libmachine: [stdout =====>] : 172.27.250.220

                                                
                                                
I1028 11:18:02.768126    1724 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:02.768657    1724 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
I1028 11:18:02.882907    1724 ssh_runner.go:235] Completed: systemctl --version: (5.257601s)
I1028 11:18:02.892634    1724 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls --format yaml --alsologtostderr: (7.9713335s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-150200 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 62491711e5f5e3145b030f5ac627e68cd16773af617d0806eb9f8a2f93295e58
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-150200
size: "30"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "94200000"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "88400000"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-150200
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "91500000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-150200 image ls --format yaml --alsologtostderr:
I1028 11:17:47.173702    2488 out.go:345] Setting OutFile to fd 1528 ...
I1028 11:17:47.284747    2488 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:17:47.284747    2488 out.go:358] Setting ErrFile to fd 1336...
I1028 11:17:47.285458    2488 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:17:47.305034    2488 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:17:47.305994    2488 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:17:47.307223    2488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:17:49.640886    2488 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:17:49.640997    2488 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:49.653150    2488 ssh_runner.go:195] Run: systemctl --version
I1028 11:17:49.653150    2488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:17:52.043022    2488 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:17:52.043022    2488 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:52.044100    2488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
I1028 11:17:54.779134    2488 main.go:141] libmachine: [stdout =====>] : 172.27.250.220

                                                
                                                
I1028 11:17:54.779319    2488 main.go:141] libmachine: [stderr =====>] : 
I1028 11:17:54.780095    2488 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
I1028 11:17:54.894888    2488 ssh_runner.go:235] Completed: systemctl --version: (5.241564s)
I1028 11:17:54.909671    2488 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (36.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-150200 ssh pgrep buildkitd: exit status 1 (10.1973692s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image build -t localhost/my-image:functional-150200 testdata\build --alsologtostderr
E1028 11:18:02.748305    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image build -t localhost/my-image:functional-150200 testdata\build --alsologtostderr: (17.9847912s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-150200 image build -t localhost/my-image:functional-150200 testdata\build --alsologtostderr:
I1028 11:18:01.387107   13336 out.go:345] Setting OutFile to fd 1156 ...
I1028 11:18:01.503731   13336 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:01.503731   13336 out.go:358] Setting ErrFile to fd 1144...
I1028 11:18:01.503842   13336 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:01.522204   13336 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:01.542940   13336 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:01.544195   13336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:18:03.851581   13336 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:18:03.852565   13336 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:03.867835   13336 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:03.867835   13336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-150200 ).state
I1028 11:18:06.415294   13336 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I1028 11:18:06.415294   13336 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:06.416288   13336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-150200 ).networkadapters[0]).ipaddresses[0]
I1028 11:18:09.246675   13336 main.go:141] libmachine: [stdout =====>] : 172.27.250.220

                                                
                                                
I1028 11:18:09.246675   13336 main.go:141] libmachine: [stderr =====>] : 
I1028 11:18:09.246915   13336 sshutil.go:53] new ssh client: &{IP:172.27.250.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-150200\id_rsa Username:docker}
I1028 11:18:09.356993   13336 ssh_runner.go:235] Completed: systemctl --version: (5.4890956s)
I1028 11:18:09.356993   13336 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2439688263.tar
I1028 11:18:09.374540   13336 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 11:18:09.405768   13336 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2439688263.tar
I1028 11:18:09.414577   13336 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2439688263.tar: stat -c "%s %y" /var/lib/minikube/build/build.2439688263.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2439688263.tar': No such file or directory
I1028 11:18:09.414577   13336 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2439688263.tar --> /var/lib/minikube/build/build.2439688263.tar (3072 bytes)
I1028 11:18:09.480054   13336 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2439688263
I1028 11:18:09.512787   13336 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2439688263 -xf /var/lib/minikube/build/build.2439688263.tar
I1028 11:18:09.531199   13336 docker.go:360] Building image: /var/lib/minikube/build/build.2439688263
I1028 11:18:09.541706   13336 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-150200 /var/lib/minikube/build/build.2439688263
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.2s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.7s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.2s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.0s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.3s done
#5 DONE 1.1s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 1.3s

#8 exporting to image
#8 exporting layers
#8 exporting layers 1.7s done
#8 writing image sha256:4ff91b413e3123fdaa07fcb4a687dbec6cc2a7dd0baa35cf351125648206a699
#8 writing image sha256:4ff91b413e3123fdaa07fcb4a687dbec6cc2a7dd0baa35cf351125648206a699 0.1s done
#8 naming to localhost/my-image:functional-150200 0.0s done
#8 DONE 1.9s
I1028 11:18:19.105651   13336 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-150200 /var/lib/minikube/build/build.2439688263: (9.5638369s)
I1028 11:18:19.118823   13336 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2439688263
I1028 11:18:19.157318   13336 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2439688263.tar
I1028 11:18:19.187997   13336 build_images.go:217] Built localhost/my-image:functional-150200 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2439688263.tar
I1028 11:18:19.187997   13336 build_images.go:133] succeeded building to: functional-150200
I1028 11:18:19.187997   13336 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls: (8.0262898s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (36.21s)
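For context, BuildKit steps #5-#7 above correspond to a three-instruction Dockerfile. The sketch below is a hypothetical by-hand reproduction of that build through minikube's image build command; the Dockerfile lines and the content.txt contents are reconstructed from the logged steps, not taken from the test's actual files:

    # Recreate an equivalent build context (reconstructed; content.txt contents are assumed)
    Set-Content -Path Dockerfile -Value @(
        'FROM gcr.io/k8s-minikube/busybox',
        'RUN true',
        'ADD content.txt /'
    )
    Set-Content -Path content.txt -Value 'test'
    # Build the image against the cluster's Docker daemon
    out/minikube-windows-amd64.exe -p functional-150200 image build -t localhost/my-image:functional-150200 .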

TestFunctional/parallel/ImageCommands/Setup (2.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.3171867s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-150200
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-150200 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-150200 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nnszl" [f0e97c10-a5a6-4950-b5b8-59cc55fe8953] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nnszl" [f0e97c10-a5a6-4950-b5b8-59cc55fe8953] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0066035s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.56s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image load --daemon kicbase/echo-server:functional-150200 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image load --daemon kicbase/echo-server:functional-150200 --alsologtostderr: (10.9865035s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls: (8.2526036s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.24s)

TestFunctional/parallel/ServiceCmd/List (15.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 service list: (15.1811162s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (15.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image load --daemon kicbase/echo-server:functional-150200 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image load --daemon kicbase/echo-server:functional-150200 --alsologtostderr: (9.6219294s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls: (9.9014033s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.53s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-150200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-150200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-150200 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-150200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2628: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13552: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (15.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 service list -o json: (15.5950978s)
functional_test.go:1494: Took "15.5950978s" to run "out/minikube-windows-amd64.exe -p functional-150200 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (15.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (19.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-150200
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image load --daemon kicbase/echo-server:functional-150200 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image load --daemon kicbase/echo-server:functional-150200 --alsologtostderr: (10.3929611s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls: (8.4000178s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (19.89s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-150200 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.82s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-150200 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a68d2f0d-fc26-434d-92cf-9899e36ba924] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a68d2f0d-fc26-434d-92cf-9899e36ba924] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0099099s
I1028 11:15:40.411619    9608 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.82s)
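With the tunnel started earlier still running, the nginx-svc LoadBalancer applied above should eventually be assigned an external IP that is reachable from the Windows host. A minimal sketch of checking that by hand, assuming the same kube context, is:

    # The tunnel process is what populates EXTERNAL-IP for LoadBalancer services
    kubectl --context functional-150200 get svc nginx-svc
    # Once EXTERNAL-IP is set, the service should answer over HTTP, e.g.:
    # Invoke-WebRequest -UseBasicParsing http://<EXTERNAL-IP>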

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-150200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8228: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 8672: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image save kicbase/echo-server:functional-150200 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image save kicbase/echo-server:functional-150200 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.9704803s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.97s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image rm kicbase/echo-server:functional-150200 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image rm kicbase/echo-server:functional-150200 --alsologtostderr: (8.6898511s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls: (8.3222127s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.01s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.6489503s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image ls: (9.0865908s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.74s)

TestFunctional/parallel/DockerEnv/powershell (49.77s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-150200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-150200"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-150200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-150200": (32.9535634s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-150200 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-150200 docker-env | Invoke-Expression ; docker images": (16.8005567s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (49.77s)
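The docker-env check above relies on Invoke-Expression evaluating the environment exports that minikube prints, after which the host's docker CLI talks to the Docker daemon inside the VM. A short sketch of verifying that state by hand (the DOCKER_HOST value in the comment is illustrative; it depends on the VM's current IP):

    # Point the local docker CLI at the daemon inside the functional-150200 VM
    out/minikube-windows-amd64.exe -p functional-150200 docker-env | Invoke-Expression
    # Should now print something like tcp://172.27.250.220:2376
    $Env:DOCKER_HOST
    # Lists the images inside the VM rather than on the Windows host
    docker images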

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-150200
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 image save --daemon kicbase/echo-server:functional-150200 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 image save --daemon kicbase/echo-server:functional-150200 --alsologtostderr: (9.5138224s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-150200
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.69s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 update-context --alsologtostderr -v=2: (2.6897425s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.69s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.86s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 update-context --alsologtostderr -v=2: (2.8535391s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.86s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.69s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-150200 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-150200 update-context --alsologtostderr -v=2: (2.6847278s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.69s)

TestFunctional/parallel/ProfileCmd/profile_not_create (15.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (14.9901348s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (15.41s)

TestFunctional/parallel/ProfileCmd/profile_list (15.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (15.0668199s)
functional_test.go:1315: Took "15.0671748s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "271.6248ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (15.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (14.05s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (13.7255427s)
functional_test.go:1366: Took "13.7264438s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "322.667ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (14.05s)

TestFunctional/delete_echo-server_images (0.22s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-150200
--- PASS: TestFunctional/delete_echo-server_images (0.22s)

TestFunctional/delete_my-image_image (0.11s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-150200
--- PASS: TestFunctional/delete_my-image_image (0.11s)

TestFunctional/delete_minikube_cached_images (0.09s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-150200
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

TestMultiControlPlane/serial/StartCluster (752.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-201400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E1028 11:24:45.525923    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:45.533667    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:45.545369    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:45.567777    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:45.610764    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:45.693758    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:45.856460    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:46.178902    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:46.821911    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:48.104003    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:50.666610    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:24:55.789806    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:25:06.032292    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:25:26.514959    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:26:07.478867    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:26:39.680224    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:29.401227    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:29:45.529561    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:30:13.246197    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:31:39.683439    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:34:42.761245    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:34:45.532415    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-201400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m53.4574642s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 status -v=7 --alsologtostderr: (38.8884334s)
--- PASS: TestMultiControlPlane/serial/StartCluster (752.35s)

TestMultiControlPlane/serial/DeployApp (14.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-201400 -- rollout status deployment/busybox: (4.6128318s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- nslookup kubernetes.io: (1.8766805s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-cvthb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- nslookup kubernetes.io: (1.826166s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-cvthb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-b84wl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-cvthb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-201400 -- exec busybox-7dff88458-gp9fd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (14.04s)

TestMultiControlPlane/serial/AddWorkerNode (277.33s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-201400 -v=7 --alsologtostderr
E1028 11:39:45.535883    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-201400 -v=7 --alsologtostderr: (3m46.4429031s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-201400 status -v=7 --alsologtostderr
E1028 11:41:08.616546    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:41:39.690948    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-201400 status -v=7 --alsologtostderr: (50.8822888s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (277.33s)

TestMultiControlPlane/serial/NodeLabels (0.2s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-201400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (50.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (50.8021928s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (50.80s)

TestImageBuild/serial/Setup (200.38s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-843600 --driver=hyperv
E1028 11:56:39.699971    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:57:48.630894    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:59:45.549657    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-843600 --driver=hyperv: (3m20.3755178s)
--- PASS: TestImageBuild/serial/Setup (200.38s)

TestImageBuild/serial/NormalBuild (10.48s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-843600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-843600: (10.4764356s)
--- PASS: TestImageBuild/serial/NormalBuild (10.48s)

TestImageBuild/serial/BuildWithBuildArg (9.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-843600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-843600: (9.10423s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.10s)

TestImageBuild/serial/BuildWithDockerIgnore (8.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-843600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-843600: (8.4730017s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.47s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.53s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-843600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-843600: (8.5300991s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.53s)

TestJSONOutput/start/Command (238.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-746000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E1028 12:01:39.703793    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:04:45.553753    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-746000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m58.4133908s)
--- PASS: TestJSONOutput/start/Command (238.41s)

TestJSONOutput/start/Audit (0.09s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.09s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.22s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-746000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-746000 --output=json --user=testUser: (8.2239297s)
--- PASS: TestJSONOutput/pause/Command (8.22s)

TestJSONOutput/pause/Audit (0.03s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.03s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.08s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-746000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-746000 --output=json --user=testUser: (8.0800129s)
--- PASS: TestJSONOutput/unpause/Command (8.08s)

TestJSONOutput/unpause/Audit (0.05s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.05s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (39.56s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-746000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-746000 --output=json --user=testUser: (39.5611245s)
--- PASS: TestJSONOutput/stop/Command (39.56s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.09s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-602100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-602100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (303.3212ms)

-- stdout --
	{"specversion":"1.0","id":"9e7c2532-ca18-45ad-b7d6-eb313cc16f66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-602100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4e873cd-c5da-4799-9815-e5a05e02a6bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"8ea3bdcb-1b02-4ebd-86c6-75bb536add54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"023b847c-d6df-4338-b75b-b145dad8257b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"ac163e28-8a5a-4021-9ea2-31c3b48f9af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"49511931-42d9-4e01-a261-291f45a7373e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"494fa72d-8274-4d69-ac6a-6c98797be8f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-602100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-602100
--- PASS: TestErrorJSONOutput (1.09s)
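Each stdout line above is a self-contained CloudEvents-style JSON object, so the error can be extracted from the stream programmatically. A minimal sketch, assuming the same intentionally failing invocation, is:

    # Re-run the failing start and keep only the error event from the JSON stream
    out/minikube-windows-amd64.exe start -p json-output-error-602100 --memory=2200 --output=json --wait=true --driver=fail |
        ForEach-Object { $_ | ConvertFrom-Json } |
        Where-Object { $_.type -eq 'io.k8s.sigs.minikube.error' } |
        ForEach-Object { '{0}: {1}' -f $_.data.name, $_.data.message }
    # Expected: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on windows/amd64
    # Clean up the profile created by the failed start
    out/minikube-windows-amd64.exe delete -p json-output-error-602100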

TestMainNoArgs (0.28s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.28s)

TestMinikubeProfile (546.52s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-840600 --driver=hyperv
E1028 12:06:39.707480    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:08:02.789737    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:09:45.555526    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-840600 --driver=hyperv: (3m22.1639156s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-840600 --driver=hyperv
E1028 12:11:39.711285    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-840600 --driver=hyperv: (3m21.6333153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-840600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.6938475s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-840600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.8788287s)
helpers_test.go:175: Cleaning up "second-840600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-840600
E1028 12:14:28.644585    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-840600: (43.898276s)
helpers_test.go:175: Cleaning up "first-840600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-840600
E1028 12:14:45.559220    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-840600: (48.5209335s)
--- PASS: TestMinikubeProfile (546.52s)

TestMountStart/serial/StartWithMountFirst (164.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-863700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E1028 12:16:39.713970    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-863700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m43.3874811s)
--- PASS: TestMountStart/serial/StartWithMountFirst (164.39s)

TestMountStart/serial/VerifyMountFirst (9.91s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-863700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-863700 ssh -- ls /minikube-host: (9.908s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.91s)

TestMountStart/serial/StartWithMountSecond (159.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-863700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E1028 12:19:45.562417    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-863700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m38.9606683s)
--- PASS: TestMountStart/serial/StartWithMountSecond (159.96s)

TestMountStart/serial/VerifyMountSecond (9.76s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-863700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-863700 ssh -- ls /minikube-host: (9.7636851s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.76s)

TestMountStart/serial/DeleteFirst (31.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-863700 --alsologtostderr -v=5
E1028 12:21:39.717371    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-863700 --alsologtostderr -v=5: (31.7250526s)
--- PASS: TestMountStart/serial/DeleteFirst (31.73s)

TestMountStart/serial/VerifyMountPostDelete (10.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-863700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-863700 ssh -- ls /minikube-host: (10.311299s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (10.31s)

TestMountStart/serial/Stop (27.75s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-863700
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-863700: (27.7487296s)
--- PASS: TestMountStart/serial/Stop (27.75s)

TestMountStart/serial/RestartStopped (123.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-863700
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-863700: (2m2.8483417s)
--- PASS: TestMountStart/serial/RestartStopped (123.85s)

TestMountStart/serial/VerifyMountPostStop (9.95s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-863700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-863700 ssh -- ls /minikube-host: (9.9520118s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.95s)

TestPreload (544.65s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-703400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E1028 12:54:45.587219    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:56:39.741484    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-703400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m38.8106285s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-703400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-703400 image pull gcr.io/k8s-minikube/busybox: (9.2500332s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-703400
E1028 12:58:02.831775    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-703400: (41.2357226s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-703400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E1028 12:59:45.590637    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-703400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m44.6367994s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-703400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-703400 image list: (7.6339338s)
helpers_test.go:175: Cleaning up "test-preload-703400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-703400
E1028 13:01:39.743863    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-703400: (43.0755163s)
--- PASS: TestPreload (544.65s)

TestScheduledStopWindows (340.98s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-493500 --memory=2048 --driver=hyperv
E1028 13:04:28.685493    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 13:04:45.593495    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-493500 --memory=2048 --driver=hyperv: (3m25.8413147s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-493500 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-493500 --schedule 5m: (11.298155s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-493500 -n scheduled-stop-493500
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-493500 -n scheduled-stop-493500: exit status 1 (10.0367473s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-493500 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-493500 -- sudo systemctl show minikube-scheduled-stop --no-page: (10.1058456s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-493500 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-493500 --schedule 5s: (11.2225881s)
E1028 13:06:39.747381    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-493500
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-493500: exit status 7 (2.6421904s)

                                                
                                                
-- stdout --
	scheduled-stop-493500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-493500 -n scheduled-stop-493500
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-493500 -n scheduled-stop-493500: exit status 7 (2.5766036s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-493500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-493500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-493500: (27.24906s)
--- PASS: TestScheduledStopWindows (340.98s)

TestRunningBinaryUpgrade (1082.6s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3874560442.exe start -p running-upgrade-767100 --memory=2200 --vm-driver=hyperv
E1028 13:09:45.597086    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3874560442.exe start -p running-upgrade-767100 --memory=2200 --vm-driver=hyperv: (8m32.6895663s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-767100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E1028 13:16:39.754852    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-767100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m21.1050626s)
helpers_test.go:175: Cleaning up "running-upgrade-767100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-767100
E1028 13:24:45.606837    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-767100: (1m7.8321814s)
--- PASS: TestRunningBinaryUpgrade (1082.60s)

TestKubernetesUpgrade (1094.12s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (3m38.0632655s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-767100
E1028 13:11:39.750894    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-767100: (35.8056742s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-767100 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-767100 status --format={{.Host}}: exit status 7 (2.6468962s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperv: (5m55.3632455s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-767100 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (338.3753ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-767100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-767100
	    minikube start -p kubernetes-upgrade-767100 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7671002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-767100 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperv
E1028 13:19:45.604016    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-767100 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperv: (7m11.3609026s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-767100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-767100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-767100: (50.3536252s)
--- PASS: TestKubernetesUpgrade (1094.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-767100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-767100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (359.9715ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-767100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

TestStoppedBinaryUpgrade/Setup (0.65s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.65s)

TestStoppedBinaryUpgrade/Upgrade (884.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3643207822.exe start -p stopped-upgrade-019500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3643207822.exe start -p stopped-upgrade-019500 --memory=2200 --vm-driver=hyperv: (7m42.7824354s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3643207822.exe -p stopped-upgrade-019500 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3643207822.exe -p stopped-upgrade-019500 stop: (37.5361746s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-019500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E1028 13:21:08.699507    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-150200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 13:21:39.758328    9608 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-292500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-019500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m24.4877477s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (884.81s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.57s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-019500
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-019500: (9.565243s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.57s)

Test skip (32/206)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-150200 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-150200 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 4080: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.05s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-150200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-150200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0466724s)

                                                
                                                
-- stdout --
	* [functional-150200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:17:10.146598   10900 out.go:345] Setting OutFile to fd 1792 ...
	I1028 11:17:10.236164   10900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:10.236164   10900 out.go:358] Setting ErrFile to fd 1776...
	I1028 11:17:10.236164   10900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:10.266538   10900 out.go:352] Setting JSON to false
	I1028 11:17:10.271047   10900 start.go:129] hostinfo: {"hostname":"minikube6","uptime":162055,"bootTime":1729952174,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 11:17:10.271047   10900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:17:10.278587   10900 out.go:177] * [functional-150200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 11:17:10.282592   10900 notify.go:220] Checking for updates...
	I1028 11:17:10.283261   10900 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:17:10.285893   10900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:17:10.288940   10900 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 11:17:10.292095   10900 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:17:10.294905   10900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:17:10.298555   10900 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:17:10.300017   10900 driver.go:394] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:980: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-150200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-150200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0274455s)

                                                
                                                
-- stdout --
	* [functional-150200] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:17:15.239180    3168 out.go:345] Setting OutFile to fd 1380 ...
	I1028 11:17:15.361713    3168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:15.361713    3168 out.go:358] Setting ErrFile to fd 1712...
	I1028 11:17:15.361713    3168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:15.389538    3168 out.go:352] Setting JSON to false
	I1028 11:17:15.397143    3168 start.go:129] hostinfo: {"hostname":"minikube6","uptime":162060,"bootTime":1729952174,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W1028 11:17:15.397143    3168 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:17:15.404199    3168 out.go:177] * [functional-150200] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1028 11:17:15.406072    3168 notify.go:220] Checking for updates...
	I1028 11:17:15.408965    3168 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I1028 11:17:15.410964    3168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:17:15.416961    3168 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I1028 11:17:15.418964    3168 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:17:15.425958    3168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:17:15.429961    3168 config.go:182] Loaded profile config "functional-150200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:17:15.430954    3168 driver.go:394] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1025: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)